Test Report: Docker_Linux_crio 21594

532dacb4acf31553658ff6b0bf62fcf9309f2277:2025-09-19:41507

Tests failed (16/329)

TestAddons/parallel/Ingress (153.13s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-120954 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-120954 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-120954 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [a36d6f3f-a7ef-439c-8cd4-f5fb615a04a2] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [a36d6f3f-a7ef-439c-8cd4-f5fb615a04a2] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 8.003980658s
I0919 22:17:26.139887   18175 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-120954 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.970390261s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-120954 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
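Note on the failure above: exit status 28 is curl's "operation timed out" code (surfaced here as "ssh: Process exited with status 28"), so the request into the node hung rather than being refused. A minimal sketch for rerunning the failing check by hand, assuming the addons-120954 profile from this run is still up and the testdata ingress/pod manifests are still applied:

	# Check that the ingress-nginx controller the test waited for is still Ready.
	kubectl --context addons-120954 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide

	# Same request the test issues from inside the node, with an explicit curl timeout
	# so a hang fails fast instead of waiting for the ssh session to be killed.
	out/minikube-linux-amd64 -p addons-120954 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"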
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-120954
helpers_test.go:243: (dbg) docker inspect addons-120954:

-- stdout --
	[
	    {
	        "Id": "519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd",
	        "Created": "2025-09-19T22:14:38.844603118Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 20138,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:14:38.887986447Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd/hostname",
	        "HostsPath": "/var/lib/docker/containers/519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd/hosts",
	        "LogPath": "/var/lib/docker/containers/519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd/519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd-json.log",
	        "Name": "/addons-120954",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-120954:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-120954",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "519cf01b7111acb74ef56903ef111937adc5f1a2e0c41b595879c4f9da2633cd",
	                "LowerDir": "/var/lib/docker/overlay2/787c1fdcde926870299d096df578b21d7a9524b0282289e9f337878e41c37f05-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/787c1fdcde926870299d096df578b21d7a9524b0282289e9f337878e41c37f05/merged",
	                "UpperDir": "/var/lib/docker/overlay2/787c1fdcde926870299d096df578b21d7a9524b0282289e9f337878e41c37f05/diff",
	                "WorkDir": "/var/lib/docker/overlay2/787c1fdcde926870299d096df578b21d7a9524b0282289e9f337878e41c37f05/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-120954",
	                "Source": "/var/lib/docker/volumes/addons-120954/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-120954",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-120954",
	                "name.minikube.sigs.k8s.io": "addons-120954",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b745c80a9b7bca0f556cc267368a49fc51f78a4a160218ed586b07ddc0f3074e",
	            "SandboxKey": "/var/run/docker/netns/b745c80a9b7b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-120954": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "a6:80:55:f6:ac:eb",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "93509702cc9aef9fc4171d54bfc302cafa328fd9278f0b8a85d1018ba8531b55",
	                    "EndpointID": "6ba26ff175ec1adb3e29c406a79f029421a080f1eea50fb6e14aa645c8ca5fda",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-120954",
	                        "519cf01b7111"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-120954 -n addons-120954
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 logs -n 25: (1.29357441s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-507391 --alsologtostderr --binary-mirror http://127.0.0.1:35061 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-507391 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ -p binary-mirror-507391                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-507391 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ addons  │ enable dashboard -p addons-120954                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ addons  │ disable dashboard -p addons-120954                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ start   │ -p addons-120954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ addons-120954 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:16 UTC │ 19 Sep 25 22:16 UTC │
	│ addons  │ addons-120954 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ enable headlamp -p addons-120954 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-120954                                                                                                                                                                                                                                                                                                                                                                                           │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ ip      │ addons-120954 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ ssh     │ addons-120954 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │                     │
	│ ssh     │ addons-120954 ssh cat /opt/local-path-provisioner/pvc-9d1a6fb2-cfbb-47a0-a7e7-bc0b5a2d6b34_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-120954 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:17 UTC │ 19 Sep 25 22:17 UTC │
	│ addons  │ addons-120954 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ addons  │ addons-120954 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:18 UTC │ 19 Sep 25 22:18 UTC │
	│ ip      │ addons-120954 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-120954        │ jenkins │ v1.37.0 │ 19 Sep 25 22:19 UTC │ 19 Sep 25 22:19 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:13
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:13.792144   19483 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:13.792374   19483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:13.792383   19483 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:13.792387   19483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:13.792563   19483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:14:13.793147   19483 out.go:368] Setting JSON to false
	I0919 22:14:13.793950   19483 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3404,"bootTime":1758316650,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:13.794052   19483 start.go:140] virtualization: kvm guest
	I0919 22:14:13.796576   19483 out.go:179] * [addons-120954] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:13.798588   19483 notify.go:220] Checking for updates...
	I0919 22:14:13.798639   19483 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:14:13.800654   19483 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:13.802295   19483 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:14:13.803789   19483 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:14:13.805152   19483 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:14:13.806575   19483 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:14:13.808043   19483 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:13.830002   19483 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:13.830091   19483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:13.887200   19483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-19 22:14:13.877858111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:13.887326   19483 docker.go:318] overlay module found
	I0919 22:14:13.889499   19483 out.go:179] * Using the docker driver based on user configuration
	I0919 22:14:13.891196   19483 start.go:304] selected driver: docker
	I0919 22:14:13.891217   19483 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:13.891228   19483 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:14:13.891798   19483 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:13.943776   19483 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-09-19 22:14:13.934592479 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:13.943962   19483 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:13.944201   19483 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:14:13.946884   19483 out.go:179] * Using Docker driver with root privileges
	I0919 22:14:13.948496   19483 cni.go:84] Creating CNI manager for ""
	I0919 22:14:13.948570   19483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:13.948584   19483 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:13.948673   19483 start.go:348] cluster config:
	{Name:addons-120954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-120954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: Network
Plugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0919 22:14:13.949991   19483 out.go:179] * Starting "addons-120954" primary control-plane node in "addons-120954" cluster
	I0919 22:14:13.951234   19483 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:14:13.952589   19483 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:14:13.953827   19483 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:13.953875   19483 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:14:13.953884   19483 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:13.953951   19483 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:13.953969   19483 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:14:13.953977   19483 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:14:13.954350   19483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/config.json ...
	I0919 22:14:13.954378   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/config.json: {Name:mkf6dd2454f799ede1957b533a7ed1bbd3d18580 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:13.970825   19483 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:13.971045   19483 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:13.971066   19483 image.go:68] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory, skipping pull
	I0919 22:14:13.971072   19483 image.go:137] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in cache, skipping pull
	I0919 22:14:13.971086   19483 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	I0919 22:14:13.971127   19483 cache.go:165] Loading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from local cache
	I0919 22:14:27.059145   19483 cache.go:167] successfully loaded and using gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 from cached tarball
	I0919 22:14:27.059186   19483 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:14:27.059224   19483 start.go:360] acquireMachinesLock for addons-120954: {Name:mkf4d97197f25399167f28c84cf178774acc9185 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:14:27.059354   19483 start.go:364] duration metric: took 105.907µs to acquireMachinesLock for "addons-120954"
	I0919 22:14:27.059406   19483 start.go:93] Provisioning new machine with config: &{Name:addons-120954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-120954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: S
ocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:14:27.059488   19483 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:14:27.061632   19483 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I0919 22:14:27.061881   19483 start.go:159] libmachine.API.Create for "addons-120954" (driver="docker")
	I0919 22:14:27.061922   19483 client.go:168] LocalClient.Create starting
	I0919 22:14:27.062168   19483 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:14:27.304908   19483 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:14:27.377598   19483 cli_runner.go:164] Run: docker network inspect addons-120954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:14:27.395145   19483 cli_runner.go:211] docker network inspect addons-120954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:14:27.395235   19483 network_create.go:284] running [docker network inspect addons-120954] to gather additional debugging logs...
	I0919 22:14:27.395268   19483 cli_runner.go:164] Run: docker network inspect addons-120954
	W0919 22:14:27.414178   19483 cli_runner.go:211] docker network inspect addons-120954 returned with exit code 1
	I0919 22:14:27.414206   19483 network_create.go:287] error running [docker network inspect addons-120954]: docker network inspect addons-120954: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-120954 not found
	I0919 22:14:27.414235   19483 network_create.go:289] output of [docker network inspect addons-120954]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-120954 not found
	
	** /stderr **
	I0919 22:14:27.414359   19483 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:14:27.432172   19483 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001a2f430}
	I0919 22:14:27.432216   19483 network_create.go:124] attempt to create docker network addons-120954 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:14:27.432275   19483 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-120954 addons-120954
	I0919 22:14:27.491821   19483 network_create.go:108] docker network addons-120954 192.168.49.0/24 created
	I0919 22:14:27.491859   19483 kic.go:121] calculated static IP "192.168.49.2" for the "addons-120954" container
	I0919 22:14:27.491952   19483 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:14:27.509288   19483 cli_runner.go:164] Run: docker volume create addons-120954 --label name.minikube.sigs.k8s.io=addons-120954 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:14:27.528326   19483 oci.go:103] Successfully created a docker volume addons-120954
	I0919 22:14:27.528399   19483 cli_runner.go:164] Run: docker run --rm --name addons-120954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-120954 --entrypoint /usr/bin/test -v addons-120954:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:14:34.368543   19483 cli_runner.go:217] Completed: docker run --rm --name addons-120954-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-120954 --entrypoint /usr/bin/test -v addons-120954:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib: (6.840110861s)
	I0919 22:14:34.368572   19483 oci.go:107] Successfully prepared a docker volume addons-120954
	I0919 22:14:34.368615   19483 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:34.368634   19483 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:14:34.368695   19483 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-120954:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:14:38.771343   19483 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-120954:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.402610958s)
	I0919 22:14:38.771378   19483 kic.go:203] duration metric: took 4.402741797s to extract preloaded images to volume ...
	W0919 22:14:38.771488   19483 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:14:38.771517   19483 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:14:38.771552   19483 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:14:38.826791   19483 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-120954 --name addons-120954 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-120954 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-120954 --network addons-120954 --ip 192.168.49.2 --volume addons-120954:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:14:39.128416   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Running}}
	I0919 22:14:39.149185   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:39.168523   19483 cli_runner.go:164] Run: docker exec addons-120954 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:14:39.217794   19483 oci.go:144] the created container "addons-120954" has a running status.
	I0919 22:14:39.217825   19483 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa...
	I0919 22:14:39.473432   19483 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:14:39.503820   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:39.523173   19483 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:14:39.523198   19483 kic_runner.go:114] Args: [docker exec --privileged addons-120954 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:14:39.574630   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:39.595889   19483 machine.go:93] provisionDockerMachine start ...
	I0919 22:14:39.596010   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:39.617609   19483 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:39.617876   19483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:14:39.617891   19483 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:14:39.755140   19483 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-120954
	
	I0919 22:14:39.755169   19483 ubuntu.go:182] provisioning hostname "addons-120954"
	I0919 22:14:39.755226   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:39.774954   19483 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:39.775226   19483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:14:39.775246   19483 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-120954 && echo "addons-120954" | sudo tee /etc/hostname
	I0919 22:14:39.926689   19483 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-120954
	
	I0919 22:14:39.926785   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:39.945327   19483 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:39.945600   19483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:14:39.945626   19483 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-120954' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-120954/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-120954' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:14:40.081601   19483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:14:40.081632   19483 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:14:40.081678   19483 ubuntu.go:190] setting up certificates
	I0919 22:14:40.081696   19483 provision.go:84] configureAuth start
	I0919 22:14:40.081756   19483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-120954
	I0919 22:14:40.099261   19483 provision.go:143] copyHostCerts
	I0919 22:14:40.099333   19483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:14:40.099436   19483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:14:40.099497   19483 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:14:40.099545   19483 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.addons-120954 san=[127.0.0.1 192.168.49.2 addons-120954 localhost minikube]
	I0919 22:14:40.372009   19483 provision.go:177] copyRemoteCerts
	I0919 22:14:40.372065   19483 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:14:40.372125   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:40.389307   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:40.487416   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:14:40.514527   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:14:40.539551   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:14:40.564927   19483 provision.go:87] duration metric: took 483.220554ms to configureAuth
	I0919 22:14:40.564955   19483 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:14:40.565189   19483 config.go:182] Loaded profile config "addons-120954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:14:40.565299   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:40.583779   19483 main.go:141] libmachine: Using SSH client type: native
	I0919 22:14:40.583981   19483 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0919 22:14:40.584001   19483 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:14:40.829252   19483 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:14:40.829274   19483 machine.go:96] duration metric: took 1.233343222s to provisionDockerMachine
	I0919 22:14:40.829286   19483 client.go:171] duration metric: took 13.767353581s to LocalClient.Create
	I0919 22:14:40.829305   19483 start.go:167] duration metric: took 13.767423932s to libmachine.API.Create "addons-120954"
	I0919 22:14:40.829315   19483 start.go:293] postStartSetup for "addons-120954" (driver="docker")
	I0919 22:14:40.829328   19483 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:14:40.829398   19483 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:14:40.829447   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:40.850889   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:40.952763   19483 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:14:40.958228   19483 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:14:40.958263   19483 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:14:40.958286   19483 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:14:40.958293   19483 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:14:40.958309   19483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:14:40.958375   19483 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:14:40.958400   19483 start.go:296] duration metric: took 129.078502ms for postStartSetup
	I0919 22:14:40.958712   19483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-120954
	I0919 22:14:40.978312   19483 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/config.json ...
	I0919 22:14:40.978600   19483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:14:40.978644   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:40.998612   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:41.092093   19483 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:14:41.096759   19483 start.go:128] duration metric: took 14.037254981s to createHost
	I0919 22:14:41.096787   19483 start.go:83] releasing machines lock for "addons-120954", held for 14.037416873s
	I0919 22:14:41.096858   19483 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-120954
	I0919 22:14:41.114236   19483 ssh_runner.go:195] Run: cat /version.json
	I0919 22:14:41.114283   19483 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:14:41.114288   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:41.114349   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:41.132845   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:41.133538   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:41.295717   19483 ssh_runner.go:195] Run: systemctl --version
	I0919 22:14:41.300293   19483 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:14:41.439868   19483 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:14:41.444965   19483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:14:41.469588   19483 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:14:41.469667   19483 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:14:41.501886   19483 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:14:41.501915   19483 start.go:495] detecting cgroup driver to use...
	I0919 22:14:41.501949   19483 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:14:41.501999   19483 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:14:41.518097   19483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:14:41.530188   19483 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:14:41.530241   19483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:14:41.544828   19483 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:14:41.559902   19483 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:14:41.625336   19483 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:14:41.697767   19483 docker.go:234] disabling docker service ...
	I0919 22:14:41.697831   19483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:14:41.715613   19483 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:14:41.727351   19483 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:14:41.795856   19483 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:14:41.906611   19483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:14:41.918685   19483 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:14:41.938192   19483 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:14:41.938257   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.951426   19483 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:14:41.951504   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.962307   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.973239   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:41.984041   19483 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:14:41.994245   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:42.004992   19483 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:42.021920   19483 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:14:42.032634   19483 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:14:42.041663   19483 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0919 22:14:42.041723   19483 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0919 22:14:42.055094   19483 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:14:42.064133   19483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:14:42.161934   19483 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:14:42.259223   19483 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:14:42.259311   19483 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:14:42.263041   19483 start.go:563] Will wait 60s for crictl version
	I0919 22:14:42.263097   19483 ssh_runner.go:195] Run: which crictl
	I0919 22:14:42.266539   19483 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:14:42.301588   19483 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:14:42.301697   19483 ssh_runner.go:195] Run: crio --version
	I0919 22:14:42.337795   19483 ssh_runner.go:195] Run: crio --version
	I0919 22:14:42.375297   19483 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:14:42.376668   19483 cli_runner.go:164] Run: docker network inspect addons-120954 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:14:42.394034   19483 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:14:42.398184   19483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:14:42.410036   19483 kubeadm.go:875] updating cluster {Name:addons-120954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-120954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] D
NSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVM
netPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:14:42.410168   19483 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:14:42.410233   19483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:14:42.478202   19483 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:14:42.478223   19483 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:14:42.478265   19483 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:14:42.512093   19483 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:14:42.512135   19483 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:14:42.512146   19483 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:14:42.512261   19483 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-120954 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:addons-120954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:14:42.512346   19483 ssh_runner.go:195] Run: crio config
	I0919 22:14:42.554611   19483 cni.go:84] Creating CNI manager for ""
	I0919 22:14:42.554636   19483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:42.554653   19483 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:14:42.554674   19483 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-120954 NodeName:addons-120954 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:14:42.554784   19483 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-120954"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:14:42.554837   19483 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:14:42.564207   19483 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:14:42.564268   19483 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 22:14:42.573491   19483 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:14:42.592871   19483 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:14:42.615152   19483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I0919 22:14:42.635415   19483 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0919 22:14:42.639900   19483 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:14:42.652081   19483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:14:42.719745   19483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:14:42.743783   19483 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954 for IP: 192.168.49.2
	I0919 22:14:42.743806   19483 certs.go:194] generating shared ca certs ...
	I0919 22:14:42.743827   19483 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:42.743972   19483 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:14:43.023694   19483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt ...
	I0919 22:14:43.023726   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt: {Name:mk66b2820207c2cbd8ece630f979b3c034ee37fe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.023916   19483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key ...
	I0919 22:14:43.023931   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key: {Name:mk4ef090030f8b57f853bcd0007c035913979d03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.024051   19483 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:14:43.260538   19483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt ...
	I0919 22:14:43.260573   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt: {Name:mkd0340f8632b4b29a326dd9a28755512ad74052 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.260767   19483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key ...
	I0919 22:14:43.260783   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key: {Name:mka9160fccb954b65e0724c54254afcfdb3d3575 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.260888   19483 certs.go:256] generating profile certs ...
	I0919 22:14:43.260966   19483 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.key
	I0919 22:14:43.260986   19483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt with IP's: []
	I0919 22:14:43.441985   19483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt ...
	I0919 22:14:43.442019   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: {Name:mke4d6d2e8ac3ae92b04839fcbc610dc6d9324d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.442210   19483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.key ...
	I0919 22:14:43.442226   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.key: {Name:mkf2970b8897ec21dba218ed1a3e89d2a9256969 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.442340   19483 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key.326440d4
	I0919 22:14:43.442363   19483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt.326440d4 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0919 22:14:43.531809   19483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt.326440d4 ...
	I0919 22:14:43.531849   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt.326440d4: {Name:mk3929fe3da0992befeb271b7bcbcca475051ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.532056   19483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key.326440d4 ...
	I0919 22:14:43.532076   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key.326440d4: {Name:mk044fdf2028b717f98d45397bb714a0dc112d01 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.532228   19483 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt.326440d4 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt
	I0919 22:14:43.532346   19483 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key.326440d4 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key
	I0919 22:14:43.532431   19483 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.key
	I0919 22:14:43.532457   19483 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.crt with IP's: []
	I0919 22:14:43.827700   19483 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.crt ...
	I0919 22:14:43.827736   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.crt: {Name:mk09a946711a607dc90b0f6d8d751398cbbf7a52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.827924   19483 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.key ...
	I0919 22:14:43.827950   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.key: {Name:mkda318df1bf7bee2461130f0fad62b99890555d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:43.828165   19483 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:14:43.828219   19483 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:14:43.828257   19483 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:14:43.828292   19483 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:14:43.828863   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:14:43.854378   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:14:43.878844   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:14:43.903378   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:14:43.927504   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0919 22:14:43.952641   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0919 22:14:43.976671   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:14:44.001652   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0919 22:14:44.026367   19483 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:14:44.055754   19483 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:14:44.074634   19483 ssh_runner.go:195] Run: openssl version
	I0919 22:14:44.080669   19483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:14:44.094442   19483 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:44.098267   19483 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:44.098331   19483 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:14:44.105421   19483 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:14:44.115310   19483 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:14:44.118944   19483 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:14:44.118996   19483 kubeadm.go:392] StartCluster: {Name:addons-120954 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:addons-120954 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSD
omain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnet
Path: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:44.119083   19483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:14:44.119148   19483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:14:44.155595   19483 cri.go:89] found id: ""
	I0919 22:14:44.155670   19483 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:14:44.165779   19483 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:14:44.176172   19483 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:14:44.176229   19483 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:14:44.186119   19483 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:14:44.186144   19483 kubeadm.go:157] found existing configuration files:
	
	I0919 22:14:44.186187   19483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:14:44.195486   19483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:14:44.195536   19483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:14:44.204812   19483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:14:44.214080   19483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:14:44.214153   19483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:14:44.224909   19483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:14:44.234881   19483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:14:44.234937   19483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:14:44.244606   19483 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:14:44.254001   19483 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:14:44.254056   19483 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:14:44.262755   19483 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:14:44.300402   19483 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:14:44.300481   19483 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:14:44.317135   19483 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:14:44.317216   19483 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:14:44.317250   19483 kubeadm.go:310] OS: Linux
	I0919 22:14:44.317294   19483 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:14:44.317345   19483 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:14:44.317433   19483 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:14:44.317518   19483 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:14:44.317565   19483 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:14:44.317617   19483 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:14:44.317660   19483 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:14:44.317723   19483 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:14:44.368550   19483 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:14:44.368657   19483 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:14:44.368812   19483 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:14:44.375855   19483 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:14:44.378780   19483 out.go:252]   - Generating certificates and keys ...
	I0919 22:14:44.378886   19483 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:14:44.378975   19483 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:14:44.552251   19483 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:14:44.985595   19483 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:14:45.041959   19483 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:14:45.413285   19483 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:14:45.622798   19483 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:14:45.622946   19483 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-120954 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:14:46.124137   19483 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:14:46.124324   19483 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-120954 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:14:46.261490   19483 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:14:46.550878   19483 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:14:46.751693   19483 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:14:46.751776   19483 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:14:46.891918   19483 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:14:47.135802   19483 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:14:47.479391   19483 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:14:47.620743   19483 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:14:47.870687   19483 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:14:47.871031   19483 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:14:47.875019   19483 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:14:47.876474   19483 out.go:252]   - Booting up control plane ...
	I0919 22:14:47.876559   19483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:14:47.876621   19483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:14:47.877269   19483 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:14:47.898299   19483 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:14:47.898593   19483 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:14:47.905846   19483 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:14:47.906288   19483 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:14:47.906356   19483 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:14:47.982989   19483 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:14:47.983208   19483 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:14:48.984739   19483 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001873459s
	I0919 22:14:48.987747   19483 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:14:48.987838   19483 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:14:48.987921   19483 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:14:48.987993   19483 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:14:50.100724   19483 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.112951566s
	I0919 22:14:50.636034   19483 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.64833606s
	I0919 22:14:52.489698   19483 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 3.50199304s
	I0919 22:14:52.500623   19483 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:14:52.512294   19483 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:14:52.523588   19483 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:14:52.523867   19483 kubeadm.go:310] [mark-control-plane] Marking the node addons-120954 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:14:52.533147   19483 kubeadm.go:310] [bootstrap-token] Using token: rf1nzk.7mywpoa6gobimv12
	I0919 22:14:52.536022   19483 out.go:252]   - Configuring RBAC rules ...
	I0919 22:14:52.536208   19483 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:14:52.538443   19483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:14:52.546214   19483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:14:52.549330   19483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:14:52.552693   19483 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:14:52.555770   19483 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:14:52.895458   19483 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:14:53.310061   19483 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:14:53.895666   19483 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:14:53.896699   19483 kubeadm.go:310] 
	I0919 22:14:53.896781   19483 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:14:53.896793   19483 kubeadm.go:310] 
	I0919 22:14:53.896875   19483 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:14:53.896883   19483 kubeadm.go:310] 
	I0919 22:14:53.896926   19483 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:14:53.897016   19483 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:14:53.897092   19483 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:14:53.897124   19483 kubeadm.go:310] 
	I0919 22:14:53.897214   19483 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:14:53.897236   19483 kubeadm.go:310] 
	I0919 22:14:53.897320   19483 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:14:53.897349   19483 kubeadm.go:310] 
	I0919 22:14:53.897435   19483 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:14:53.897562   19483 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:14:53.897656   19483 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:14:53.897666   19483 kubeadm.go:310] 
	I0919 22:14:53.897777   19483 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:14:53.897884   19483 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:14:53.897897   19483 kubeadm.go:310] 
	I0919 22:14:53.898003   19483 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token rf1nzk.7mywpoa6gobimv12 \
	I0919 22:14:53.898187   19483 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 22:14:53.898251   19483 kubeadm.go:310] 	--control-plane 
	I0919 22:14:53.898268   19483 kubeadm.go:310] 
	I0919 22:14:53.898378   19483 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:14:53.898387   19483 kubeadm.go:310] 
	I0919 22:14:53.898498   19483 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token rf1nzk.7mywpoa6gobimv12 \
	I0919 22:14:53.898623   19483 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 22:14:53.900614   19483 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:14:53.900786   19483 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:14:53.900808   19483 cni.go:84] Creating CNI manager for ""
	I0919 22:14:53.900817   19483 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:53.903399   19483 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:14:53.904688   19483 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:14:53.908853   19483 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:14:53.908870   19483 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:14:53.930229   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:14:54.150540   19483 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:14:54.150684   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:54.150684   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-120954 minikube.k8s.io/updated_at=2025_09_19T22_14_54_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=addons-120954 minikube.k8s.io/primary=true
	I0919 22:14:54.160326   19483 ops.go:34] apiserver oom_adj: -16
	I0919 22:14:54.241030   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:54.741202   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:55.242185   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:55.741213   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:56.242015   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:56.741950   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:57.241405   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:57.741977   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:58.241765   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:58.741416   19483 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:14:58.806973   19483 kubeadm.go:1105] duration metric: took 4.656344527s to wait for elevateKubeSystemPrivileges
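The repeated "get sa default" runs above are minikube waiting for the default ServiceAccount to be created before finishing the elevateKubeSystemPrivileges step. A minimal hand-rolled equivalent of that wait, using the same kubectl binary and kubeconfig shown in the log, would be roughly:

    # sketch only: poll (up to ~2 minutes) until the default ServiceAccount exists
    for _ in $(seq 1 240); do
      sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
        --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1 && break
      sleep 0.5
    done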
	I0919 22:14:58.807010   19483 kubeadm.go:394] duration metric: took 14.688017115s to StartCluster
	I0919 22:14:58.807034   19483 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:58.807190   19483 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:14:58.807620   19483 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:58.807822   19483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:14:58.807866   19483 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:14:58.807900   19483 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
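The map above is the resolved addon set for this profile. Outside of the start path, individual entries can be toggled with the addons subcommand; for example (a sketch using the same binary and profile as this run):

    out/minikube-linux-amd64 -p addons-120954 addons enable metrics-server
    out/minikube-linux-amd64 -p addons-120954 addons disable inspektor-gadget
    out/minikube-linux-amd64 -p addons-120954 addons list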
	I0919 22:14:58.808043   19483 addons.go:69] Setting yakd=true in profile "addons-120954"
	I0919 22:14:58.808052   19483 addons.go:69] Setting inspektor-gadget=true in profile "addons-120954"
	I0919 22:14:58.808066   19483 addons.go:238] Setting addon yakd=true in "addons-120954"
	I0919 22:14:58.808072   19483 addons.go:238] Setting addon inspektor-gadget=true in "addons-120954"
	I0919 22:14:58.808092   19483 config.go:182] Loaded profile config "addons-120954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:14:58.808115   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808126   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808120   19483 addons.go:69] Setting default-storageclass=true in profile "addons-120954"
	I0919 22:14:58.808142   19483 addons.go:69] Setting cloud-spanner=true in profile "addons-120954"
	I0919 22:14:58.808154   19483 addons.go:238] Setting addon cloud-spanner=true in "addons-120954"
	I0919 22:14:58.808173   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808170   19483 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-120954"
	I0919 22:14:58.808193   19483 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-120954"
	I0919 22:14:58.808323   19483 addons.go:238] Setting addon csi-hostpath-driver=true in "addons-120954"
	I0919 22:14:58.808365   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808416   19483 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-120954"
	I0919 22:14:58.808441   19483 addons.go:238] Setting addon nvidia-device-plugin=true in "addons-120954"
	I0919 22:14:58.808476   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808571   19483 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-120954"
	I0919 22:14:58.808638   19483 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-120954"
	I0919 22:14:58.808646   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808649   19483 addons.go:69] Setting registry=true in profile "addons-120954"
	I0919 22:14:58.808662   19483 addons.go:238] Setting addon registry=true in "addons-120954"
	I0919 22:14:58.808685   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808720   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808726   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808829   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808910   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808926   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808990   19483 addons.go:69] Setting registry-creds=true in profile "addons-120954"
	I0919 22:14:58.809042   19483 addons.go:238] Setting addon registry-creds=true in "addons-120954"
	I0919 22:14:58.809094   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.809131   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.809416   19483 addons.go:69] Setting metrics-server=true in profile "addons-120954"
	I0919 22:14:58.809443   19483 addons.go:238] Setting addon metrics-server=true in "addons-120954"
	I0919 22:14:58.809465   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.809915   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.810283   19483 addons.go:69] Setting volcano=true in profile "addons-120954"
	I0919 22:14:58.810306   19483 addons.go:238] Setting addon volcano=true in "addons-120954"
	I0919 22:14:58.810329   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.810329   19483 addons.go:69] Setting volumesnapshots=true in profile "addons-120954"
	I0919 22:14:58.810350   19483 addons.go:238] Setting addon volumesnapshots=true in "addons-120954"
	I0919 22:14:58.810376   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.810407   19483 out.go:179] * Verifying Kubernetes components...
	I0919 22:14:58.808573   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.808584   19483 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-120954"
	I0919 22:14:58.810733   19483 addons.go:238] Setting addon amd-gpu-device-plugin=true in "addons-120954"
	I0919 22:14:58.810768   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808592   19483 addons.go:69] Setting ingress=true in profile "addons-120954"
	I0919 22:14:58.810994   19483 addons.go:238] Setting addon ingress=true in "addons-120954"
	I0919 22:14:58.811033   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808596   19483 addons.go:69] Setting gcp-auth=true in profile "addons-120954"
	I0919 22:14:58.811233   19483 mustload.go:65] Loading cluster: addons-120954
	I0919 22:14:58.808608   19483 addons.go:69] Setting storage-provisioner=true in profile "addons-120954"
	I0919 22:14:58.811366   19483 addons.go:238] Setting addon storage-provisioner=true in "addons-120954"
	I0919 22:14:58.811399   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.808615   19483 addons.go:69] Setting ingress-dns=true in profile "addons-120954"
	I0919 22:14:58.811702   19483 addons.go:238] Setting addon ingress-dns=true in "addons-120954"
	I0919 22:14:58.811744   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.812364   19483 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:14:58.823240   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.826195   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.828960   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.829713   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.832037   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.841510   19483 config.go:182] Loaded profile config "addons-120954": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:14:58.842535   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.843691   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.852362   19483 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.44.1
	I0919 22:14:58.853978   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.854289   19483 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.41
	I0919 22:14:58.854647   19483 addons.go:435] installing /etc/kubernetes/addons/ig-crd.yaml
	I0919 22:14:58.854665   19483 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (14 bytes)
	I0919 22:14:58.854742   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.857723   19483 addons.go:435] installing /etc/kubernetes/addons/deployment.yaml
	I0919 22:14:58.857748   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0919 22:14:58.857806   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.871041   19483 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0919 22:14:58.872756   19483 addons.go:435] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0919 22:14:58.872783   19483 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0919 22:14:58.872884   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.887418   19483 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I0919 22:14:58.889207   19483 addons.go:435] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:14:58.889232   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I0919 22:14:58.889308   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.891622   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.899328   19483 addons.go:238] Setting addon default-storageclass=true in "addons-120954"
	I0919 22:14:58.899377   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.899857   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.906485   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0919 22:14:58.906505   19483 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I0919 22:14:58.906810   19483 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I0919 22:14:58.908560   19483 addons.go:435] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:14:58.908578   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I0919 22:14:58.908638   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.908846   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0919 22:14:58.908860   19483 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0919 22:14:58.908907   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.909406   19483 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 22:14:58.909419   19483 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 22:14:58.909464   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.909553   19483 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I0919 22:14:58.911937   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0919 22:14:58.911961   19483 out.go:179]   - Using image docker.io/registry:3.0.0
	I0919 22:14:58.914706   19483 addons.go:435] installing /etc/kubernetes/addons/registry-rc.yaml
	I0919 22:14:58.914729   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0919 22:14:58.914795   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.916937   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0919 22:14:58.920810   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0919 22:14:58.923295   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0919 22:14:58.925830   19483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:14:58.927381   19483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:14:58.927509   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0919 22:14:58.929039   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0919 22:14:58.930419   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0919 22:14:58.930467   19483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.2
	I0919 22:14:58.932320   19483 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0919 22:14:58.933322   19483 addons.go:435] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:14:58.933340   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0919 22:14:58.933474   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.935382   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0919 22:14:58.935402   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0919 22:14:58.935458   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.949808   19483 addons.go:238] Setting addon storage-provisioner-rancher=true in "addons-120954"
	I0919 22:14:58.949858   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:14:58.950373   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:14:58.953295   19483 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:14:58.953316   19483 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:14:58.953390   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	W0919 22:14:58.963530   19483 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0919 22:14:58.964127   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:58.972400   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:58.973006   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:58.974803   19483 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:14:58.976552   19483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:14:58.976709   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:14:58.976836   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.978716   19483 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.3
	I0919 22:14:58.987013   19483 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I0919 22:14:58.987218   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:58.987554   19483 addons.go:435] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:14:58.987570   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0919 22:14:58.987625   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.988738   19483 addons.go:435] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:14:58.988760   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I0919 22:14:58.988812   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:58.998178   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:58.998621   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.002992   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.010094   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.013506   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.020212   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.024225   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.025191   19483 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	W0919 22:14:59.025538   19483 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0919 22:14:59.025588   19483 retry.go:31] will retry after 349.122706ms: ssh: handshake failed: EOF
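The handshake EOF is treated as transient and the helper simply retries after a delay. A bounded shell equivalent of that probe, reusing the connection details already printed above (port 32768, the profile's id_rsa key, user docker), would be roughly:

    # sketch: retry an ssh probe a few times with growing delays (delays are illustrative)
    for delay in 0.35 0.7 1.4; do
      ssh -o StrictHostKeyChecking=no -p 32768 \
        -i /home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa \
        docker@127.0.0.1 true && break
      sleep "$delay"
    done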
	I0919 22:14:59.029015   19483 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
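For readability, the stanza this sed pipeline splices into the CoreDNS Corefile, just above the "forward . /etc/resolv.conf" line (a log directive is also inserted above errors), is:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }

This is what makes host.minikube.internal resolvable from inside the cluster, as confirmed by the "host record injected" line at 22:14:59.457082 below.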
	I0919 22:14:59.029680   19483 out.go:179]   - Using image docker.io/busybox:stable
	I0919 22:14:59.035485   19483 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:14:59.035513   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0919 22:14:59.035588   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:14:59.037992   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.040111   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.040234   19483 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:14:59.043961   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.064826   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:14:59.143310   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0919 22:14:59.161565   19483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0919 22:14:59.161593   19483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0919 22:14:59.186345   19483 addons.go:435] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0919 22:14:59.186370   19483 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0919 22:14:59.189496   19483 addons.go:435] installing /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:14:59.189518   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I0919 22:14:59.193840   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0919 22:14:59.195189   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0919 22:14:59.195259   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0919 22:14:59.218402   19483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 22:14:59.218425   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0919 22:14:59.219080   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I0919 22:14:59.232890   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0919 22:14:59.236454   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I0919 22:14:59.236892   19483 addons.go:435] installing /etc/kubernetes/addons/registry-svc.yaml
	I0919 22:14:59.236901   19483 addons.go:435] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0919 22:14:59.236911   19483 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0919 22:14:59.236985   19483 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0919 22:14:59.245057   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0919 22:14:59.253318   19483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0919 22:14:59.253351   19483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0919 22:14:59.253694   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:14:59.257546   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:14:59.267363   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0919 22:14:59.267391   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0919 22:14:59.279666   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0919 22:14:59.282247   19483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 22:14:59.282277   19483 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 22:14:59.333367   19483 addons.go:435] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0919 22:14:59.333398   19483 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0919 22:14:59.346666   19483 addons.go:435] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0919 22:14:59.346698   19483 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0919 22:14:59.352375   19483 addons.go:435] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:14:59.352401   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0919 22:14:59.353097   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0919 22:14:59.353133   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0919 22:14:59.362162   19483 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:14:59.362203   19483 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 22:14:59.419540   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 22:14:59.431139   19483 addons.go:435] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:14:59.431184   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0919 22:14:59.431952   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0919 22:14:59.431973   19483 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0919 22:14:59.453201   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0919 22:14:59.453250   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0919 22:14:59.457082   19483 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 22:14:59.459000   19483 node_ready.go:35] waiting up to 6m0s for node "addons-120954" to be "Ready" ...
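A manual equivalent of the node wait that starts here, run from the host against the kubeconfig updated above, would be:

    kubectl --context addons-120954 wait --for=condition=Ready node/addons-120954 --timeout=6m0s

The node_ready.go lines below poll the same condition and report "Ready":"False" until it flips.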
	I0919 22:14:59.459684   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0919 22:14:59.485061   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0919 22:14:59.498212   19483 addons.go:435] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:14:59.498247   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0919 22:14:59.518688   19483 addons.go:435] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0919 22:14:59.518717   19483 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0919 22:14:59.545685   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0919 22:14:59.566200   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0919 22:14:59.566229   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0919 22:14:59.590677   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:14:59.649806   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0919 22:14:59.649831   19483 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0919 22:14:59.754854   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0919 22:14:59.754878   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0919 22:14:59.820608   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0919 22:14:59.820694   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0919 22:14:59.904597   19483 addons.go:435] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:14:59.904631   19483 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0919 22:14:59.953204   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0919 22:14:59.966876   19483 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-120954" context rescaled to 1 replicas
	I0919 22:15:00.495176   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.262243519s)
	I0919 22:15:00.495308   19483 addons.go:479] Verifying addon ingress=true in "addons-120954"
	I0919 22:15:00.495331   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (1.241612143s)
	W0919 22:15:00.495367   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:00.495237   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.250135004s)
	I0919 22:15:00.495455   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.237879549s)
	I0919 22:15:00.495516   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.215826737s)
	I0919 22:15:00.495669   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.076091665s)
	I0919 22:15:00.495691   19483 addons.go:479] Verifying addon metrics-server=true in "addons-120954"
	I0919 22:15:00.495735   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.036017808s)
	I0919 22:15:00.495759   19483 addons.go:479] Verifying addon registry=true in "addons-120954"
	I0919 22:15:00.495393   19483 retry.go:31] will retry after 139.793054ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget created
	serviceaccount/gadget created
	configmap/gadget created
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role created
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding created
	role.rbac.authorization.k8s.io/gadget-role created
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding created
	daemonset.apps/gadget created
	
	stderr:
	Warning: spec.template.metadata.annotations[container.apparmor.security.beta.kubernetes.io/gadget]: deprecated since v1.30; use the "appArmorProfile" field instead
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
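Both failures above point at the same file, /etc/kubernetes/addons/ig-crd.yaml. The copy step at 22:14:58.854665 reports only 14 bytes for that file, so whatever landed there cannot carry the top-level fields kubectl's validator is rejecting, which would explain why retrying the apply keeps failing with the identical error until the file itself is fixed. For reference, any valid manifest begins with at least (values here are illustrative, not the actual ig-crd contents):

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition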
	I0919 22:15:00.495804   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (1.010702271s)
	I0919 22:15:00.497488   19483 out.go:179] * Verifying ingress addon...
	I0919 22:15:00.498823   19483 out.go:179] * Verifying registry addon...
	I0919 22:15:00.499047   19483 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-120954 service yakd-dashboard -n yakd-dashboard
	
	I0919 22:15:00.500655   19483 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0919 22:15:00.502552   19483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0919 22:15:00.506275   19483 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 22:15:00.506302   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:00.506325   19483 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0919 22:15:00.506337   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
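A manual way to watch the same two label selectors the harness is polling here, using the namespaces given above:

    kubectl --context addons-120954 -n ingress-nginx get pods -l app.kubernetes.io/name=ingress-nginx
    kubectl --context addons-120954 -n kube-system get pods -l kubernetes.io/minikube-addons=registry

Both sets stay Pending in the lines below, largely because the node itself has not reported Ready yet at this point in the run.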
	I0919 22:15:00.636779   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:01.005444   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:01.033682   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.487945973s)
	W0919 22:15:01.033742   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0919 22:15:01.033768   19483 retry.go:31] will retry after 313.766373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	Warning: unrecognized format "int64"
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
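This is the usual CRD-ordering race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates its CustomResourceDefinition, and the API server has not registered the new kind yet, hence "no matches for kind VolumeSnapshotClass". The retry the log performs is one fix; an explicit ordering would be (a sketch, reusing the file names from this run rather than minikube's actual code path):

    # apply the CRD on its own, wait until it is Established, then apply the class
    kubectl apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait --for=condition=established --timeout=60s \
      crd/volumesnapshotclasses.snapshot.storage.k8s.io
    kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml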
	I0919 22:15:01.033808   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.443097224s)
	I0919 22:15:01.034238   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.080991869s)
	I0919 22:15:01.034266   19483 addons.go:479] Verifying addon csi-hostpath-driver=true in "addons-120954"
	I0919 22:15:01.038191   19483 out.go:179] * Verifying csi-hostpath-driver addon...
	I0919 22:15:01.040373   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:01.041037   19483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0919 22:15:01.043758   19483 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 22:15:01.043784   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:01.302552   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:01.302588   19483 retry.go:31] will retry after 310.501364ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:01.348082   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	W0919 22:15:01.462197   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:01.504325   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:01.506944   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:01.544839   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:01.613252   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:02.005761   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:02.006004   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:02.044535   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:02.504304   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:02.506022   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:02.544436   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:03.004546   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:03.005826   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:03.044591   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:03.462263   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:03.504565   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:03.504596   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:03.544242   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:03.840309   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.492157509s)
	I0919 22:15:03.840380   19483 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (2.227084861s)
	W0919 22:15:03.840413   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:03.840430   19483 retry.go:31] will retry after 682.922583ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:04.005053   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:04.005632   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:04.044247   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:04.504095   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:04.506340   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:04.524398   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:04.544595   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:05.005126   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:05.006028   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:05.044214   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:05.083583   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:05.083617   19483 retry.go:31] will retry after 667.130446ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:05.503711   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:05.505025   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:05.544767   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:05.750916   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0919 22:15:05.961819   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:06.004522   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:06.005526   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:06.044292   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:06.312854   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:06.312899   19483 retry.go:31] will retry after 1.500769224s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:06.500831   19483 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0919 22:15:06.500906   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:15:06.503966   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:06.505224   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:06.519330   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:15:06.543906   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:06.625962   19483 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0919 22:15:06.645171   19483 addons.go:238] Setting addon gcp-auth=true in "addons-120954"
	I0919 22:15:06.645220   19483 host.go:66] Checking if "addons-120954" exists ...
	I0919 22:15:06.645584   19483 cli_runner.go:164] Run: docker container inspect addons-120954 --format={{.State.Status}}
	I0919 22:15:06.663614   19483 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0919 22:15:06.663670   19483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-120954
	I0919 22:15:06.681953   19483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/addons-120954/id_rsa Username:docker}
	I0919 22:15:06.776641   19483 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.2
	I0919 22:15:06.778502   19483 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I0919 22:15:06.779820   19483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0919 22:15:06.779833   19483 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0919 22:15:06.800445   19483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0919 22:15:06.800483   19483 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0919 22:15:06.820449   19483 addons.go:435] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:06.820470   19483 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0919 22:15:06.840358   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0919 22:15:07.004136   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:07.005715   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:07.044592   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:07.160795   19483 addons.go:479] Verifying addon gcp-auth=true in "addons-120954"
	I0919 22:15:07.162184   19483 out.go:179] * Verifying gcp-auth addon...
	I0919 22:15:07.164407   19483 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0919 22:15:07.166973   19483 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0919 22:15:07.166993   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:07.504453   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:07.504889   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:07.544518   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:07.667043   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:07.814262   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0919 22:15:07.962405   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:08.004725   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:08.004861   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:08.044833   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:08.168075   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:08.367391   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:08.367423   19483 retry.go:31] will retry after 2.089346696s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:08.504194   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:08.505743   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:08.544416   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:08.668219   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:09.003950   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:09.006267   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:09.043692   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:09.167182   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:09.503264   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:09.505018   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:09.544728   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:09.667454   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:10.004329   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:10.004405   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:10.043962   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:10.168072   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:10.457399   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0919 22:15:10.461783   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:10.503462   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:10.504931   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:10.544419   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:10.668261   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:11.004448   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:11.004467   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:11.004480   19483 retry.go:31] will retry after 2.80447815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:11.004785   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:11.044384   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:11.167989   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:11.504493   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:11.504662   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:11.544194   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:11.667731   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:12.004310   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:12.004593   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:12.043987   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:12.167855   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:12.462463   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:12.504611   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:12.504763   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:12.544461   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:12.666869   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:13.004348   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:13.004559   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:13.044257   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:13.168425   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:13.503821   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:13.505090   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:13.543850   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:13.667380   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:13.809703   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:14.004578   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:14.004627   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:14.044098   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:14.168303   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:14.346344   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:14.346453   19483 retry.go:31] will retry after 2.723209075s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:14.504790   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:14.506446   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:14.543878   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:14.667456   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:14.962028   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:15.003628   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:15.004978   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:15.043569   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:15.167021   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:15.504499   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:15.504621   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:15.544324   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:15.667802   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:16.004290   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:16.005692   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:16.044117   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:16.167840   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:16.504624   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:16.504772   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:16.544335   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:16.667611   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:16.962193   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:17.003487   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:17.004903   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:17.044333   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:17.070514   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:17.168347   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:17.504597   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:17.504668   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:17.545383   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	W0919 22:15:17.608442   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:17.608468   19483 retry.go:31] will retry after 9.247909858s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:17.667305   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:18.004288   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:18.004815   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:18.044661   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:18.167573   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:18.503647   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:18.505260   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:18.543712   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:18.668355   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:18.962768   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:19.003367   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:19.005210   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:19.043758   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:19.167702   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:19.504389   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:19.504930   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:19.544306   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:19.667910   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:20.004162   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:20.004646   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:20.044141   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:20.168224   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:20.503920   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:20.505628   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:20.544274   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:20.667767   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:20.962953   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:21.004072   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:21.005550   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:21.044381   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:21.168150   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:21.504605   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:21.504703   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:21.544182   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:21.667762   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:22.004397   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:22.004712   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:22.044214   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:22.167882   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:22.504983   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:22.505003   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:22.544639   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:22.667199   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:23.003665   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:23.005024   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:23.044483   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:23.167037   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:23.461470   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:23.504301   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:23.504570   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:23.544186   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:23.667547   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:24.003954   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:24.004420   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:24.044193   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:24.168124   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:24.504374   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:24.504686   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:24.544250   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:24.667826   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:25.004213   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:25.004436   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:25.044096   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:25.167527   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:25.462403   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:25.504119   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:25.505524   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:25.544248   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:25.667630   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:26.004384   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:26.006261   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:26.043640   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:26.167525   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:26.503726   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:26.505274   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:26.543824   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:26.667392   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:26.856571   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:27.004321   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:27.004552   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:27.045964   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:27.167711   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:27.407819   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:27.407845   19483 retry.go:31] will retry after 5.979281705s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:27.504065   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:27.505914   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:27.544530   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:27.666552   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:27.962224   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:28.004015   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:28.005645   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:28.044279   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:28.168040   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:28.503804   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:28.505289   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:28.543773   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:28.668700   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:29.003608   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:29.005443   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:29.044249   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:29.167890   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:29.503667   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:29.505063   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:29.543730   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:29.667228   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:30.003936   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:30.005790   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:30.044753   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:30.167258   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:30.461680   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:30.504910   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:30.505086   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:30.544670   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:30.667217   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:31.004553   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:31.006404   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:31.044276   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:31.168003   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:31.503811   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:31.505506   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:31.544465   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:31.667813   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:32.004784   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:32.005282   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:32.044055   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:32.167795   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:32.462541   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:32.504778   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:32.505063   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:32.544687   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:32.667148   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:33.004694   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:33.005032   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:33.043708   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:33.167644   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:33.387951   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:33.504290   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:33.505096   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:33.544830   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:33.667743   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:33.938378   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:33.938414   19483 retry.go:31] will retry after 10.069389859s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:34.003675   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:34.006020   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:34.044568   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:34.167682   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:34.504429   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:34.505022   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:34.544695   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:34.667847   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:34.963036   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:35.003901   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:35.005765   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:35.044276   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:35.167942   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:35.504186   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:35.504676   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:35.544282   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:35.667725   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:36.004066   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:36.005846   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:36.044537   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:36.167136   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:36.503981   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:36.506158   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:36.543602   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:36.667351   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:37.003564   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:37.005215   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:37.043691   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:37.167386   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:37.462155   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:37.503857   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:37.505537   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:37.543831   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:37.667396   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:38.004202   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:38.005989   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:38.044370   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:38.168350   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:38.504024   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:38.505756   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:38.544292   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:38.668567   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:39.003803   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:39.006296   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:39.044406   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:39.167332   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:39.462715   19483 node_ready.go:57] node "addons-120954" has "Ready":"False" status (will retry)
	I0919 22:15:39.504593   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:39.504953   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:39.544522   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:39.666978   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:40.003651   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:40.005470   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:40.043910   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:40.167570   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:40.462235   19483 node_ready.go:49] node "addons-120954" is "Ready"
	I0919 22:15:40.462265   19483 node_ready.go:38] duration metric: took 41.0032306s for node "addons-120954" to be "Ready" ...
	I0919 22:15:40.462280   19483 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:15:40.462330   19483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:15:40.477768   19483 api_server.go:72] duration metric: took 41.669868891s to wait for apiserver process to appear ...
	I0919 22:15:40.477799   19483 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:15:40.477821   19483 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:15:40.482164   19483 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:15:40.483204   19483 api_server.go:141] control plane version: v1.34.0
	I0919 22:15:40.483232   19483 api_server.go:131] duration metric: took 5.425112ms to wait for apiserver health ...
	I0919 22:15:40.483242   19483 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:15:40.490096   19483 system_pods.go:59] 20 kube-system pods found
	I0919 22:15:40.490165   19483 system_pods.go:61] "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:40.490179   19483 system_pods.go:61] "coredns-66bc5c9577-c8ncc" [abedc204-75fe-46af-8882-85f2ee09c5e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:40.490187   19483 system_pods.go:61] "csi-hostpath-attacher-0" [ea5638dc-d06c-4735-ab40-2da5f58c91f9] Pending
	I0919 22:15:40.490196   19483 system_pods.go:61] "csi-hostpath-resizer-0" [596d07bf-2d9b-4279-8c13-1c530cfed969] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:40.490202   19483 system_pods.go:61] "csi-hostpathplugin-kqsvw" [90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4] Pending
	I0919 22:15:40.490208   19483 system_pods.go:61] "etcd-addons-120954" [368d4d94-f56a-4d5c-b555-2d182bc2c13c] Running
	I0919 22:15:40.490213   19483 system_pods.go:61] "kindnet-8tkc8" [30337f08-8000-4aeb-9124-53237d4322ab] Running
	I0919 22:15:40.490218   19483 system_pods.go:61] "kube-apiserver-addons-120954" [96c6db34-a08d-46a6-b200-6c0ef9b04176] Running
	I0919 22:15:40.490223   19483 system_pods.go:61] "kube-controller-manager-addons-120954" [b9fa85ca-6b19-4d0e-af1b-b537988a5937] Running
	I0919 22:15:40.490232   19483 system_pods.go:61] "kube-ingress-dns-minikube" [8d047756-aa17-4c00-95c6-70e1fc3336fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:40.490243   19483 system_pods.go:61] "kube-proxy-cvw9l" [9b53e223-4882-4160-90e0-a696f562a007] Running
	I0919 22:15:40.490249   19483 system_pods.go:61] "kube-scheduler-addons-120954" [182e65bf-60e6-4b55-850c-1a9fa2efd82b] Running
	I0919 22:15:40.490257   19483 system_pods.go:61] "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:40.490265   19483 system_pods.go:61] "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:40.490273   19483 system_pods.go:61] "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:40.490282   19483 system_pods.go:61] "registry-creds-764b6fb674-z7tnf" [9e3508a8-a24b-4fed-a7a5-14ac8130a58f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:40.490289   19483 system_pods.go:61] "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:40.490297   19483 system_pods.go:61] "snapshot-controller-7d9fbc56b8-gcn75" [ae07922c-b911-4895-9202-b690a79195bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:40.490302   19483 system_pods.go:61] "snapshot-controller-7d9fbc56b8-j6s5g" [769612bd-b086-41f9-9223-70c4963110bd] Pending
	I0919 22:15:40.490309   19483 system_pods.go:61] "storage-provisioner" [3c2c8204-2a1d-455b-9315-fc2539d7162f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:40.490316   19483 system_pods.go:74] duration metric: took 7.067592ms to wait for pod list to return data ...
	I0919 22:15:40.490327   19483 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:15:40.495307   19483 default_sa.go:45] found service account: "default"
	I0919 22:15:40.495355   19483 default_sa.go:55] duration metric: took 5.020304ms for default service account to be created ...
	I0919 22:15:40.495367   19483 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:15:40.501927   19483 system_pods.go:86] 20 kube-system pods found
	I0919 22:15:40.501965   19483 system_pods.go:89] "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:40.501976   19483 system_pods.go:89] "coredns-66bc5c9577-c8ncc" [abedc204-75fe-46af-8882-85f2ee09c5e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:40.501985   19483 system_pods.go:89] "csi-hostpath-attacher-0" [ea5638dc-d06c-4735-ab40-2da5f58c91f9] Pending
	I0919 22:15:40.501994   19483 system_pods.go:89] "csi-hostpath-resizer-0" [596d07bf-2d9b-4279-8c13-1c530cfed969] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:40.502004   19483 system_pods.go:89] "csi-hostpathplugin-kqsvw" [90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:40.502012   19483 system_pods.go:89] "etcd-addons-120954" [368d4d94-f56a-4d5c-b555-2d182bc2c13c] Running
	I0919 22:15:40.502019   19483 system_pods.go:89] "kindnet-8tkc8" [30337f08-8000-4aeb-9124-53237d4322ab] Running
	I0919 22:15:40.502025   19483 system_pods.go:89] "kube-apiserver-addons-120954" [96c6db34-a08d-46a6-b200-6c0ef9b04176] Running
	I0919 22:15:40.502031   19483 system_pods.go:89] "kube-controller-manager-addons-120954" [b9fa85ca-6b19-4d0e-af1b-b537988a5937] Running
	I0919 22:15:40.502040   19483 system_pods.go:89] "kube-ingress-dns-minikube" [8d047756-aa17-4c00-95c6-70e1fc3336fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:40.502046   19483 system_pods.go:89] "kube-proxy-cvw9l" [9b53e223-4882-4160-90e0-a696f562a007] Running
	I0919 22:15:40.502052   19483 system_pods.go:89] "kube-scheduler-addons-120954" [182e65bf-60e6-4b55-850c-1a9fa2efd82b] Running
	I0919 22:15:40.502062   19483 system_pods.go:89] "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:40.502070   19483 system_pods.go:89] "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:40.502080   19483 system_pods.go:89] "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:40.502088   19483 system_pods.go:89] "registry-creds-764b6fb674-z7tnf" [9e3508a8-a24b-4fed-a7a5-14ac8130a58f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:40.502096   19483 system_pods.go:89] "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:40.502220   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcn75" [ae07922c-b911-4895-9202-b690a79195bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:40.502242   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-j6s5g" [769612bd-b086-41f9-9223-70c4963110bd] Pending
	I0919 22:15:40.502263   19483 system_pods.go:89] "storage-provisioner" [3c2c8204-2a1d-455b-9315-fc2539d7162f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:40.502294   19483 retry.go:31] will retry after 245.036237ms: missing components: kube-dns
	I0919 22:15:40.503531   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:40.505204   19483 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0919 22:15:40.505226   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:40.546058   19483 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0919 22:15:40.546079   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:40.668189   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:40.770724   19483 system_pods.go:86] 20 kube-system pods found
	I0919 22:15:40.770766   19483 system_pods.go:89] "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:40.770778   19483 system_pods.go:89] "coredns-66bc5c9577-c8ncc" [abedc204-75fe-46af-8882-85f2ee09c5e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:40.770789   19483 system_pods.go:89] "csi-hostpath-attacher-0" [ea5638dc-d06c-4735-ab40-2da5f58c91f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:15:40.770798   19483 system_pods.go:89] "csi-hostpath-resizer-0" [596d07bf-2d9b-4279-8c13-1c530cfed969] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:40.770807   19483 system_pods.go:89] "csi-hostpathplugin-kqsvw" [90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:40.770817   19483 system_pods.go:89] "etcd-addons-120954" [368d4d94-f56a-4d5c-b555-2d182bc2c13c] Running
	I0919 22:15:40.770824   19483 system_pods.go:89] "kindnet-8tkc8" [30337f08-8000-4aeb-9124-53237d4322ab] Running
	I0919 22:15:40.770832   19483 system_pods.go:89] "kube-apiserver-addons-120954" [96c6db34-a08d-46a6-b200-6c0ef9b04176] Running
	I0919 22:15:40.770838   19483 system_pods.go:89] "kube-controller-manager-addons-120954" [b9fa85ca-6b19-4d0e-af1b-b537988a5937] Running
	I0919 22:15:40.770850   19483 system_pods.go:89] "kube-ingress-dns-minikube" [8d047756-aa17-4c00-95c6-70e1fc3336fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:40.770859   19483 system_pods.go:89] "kube-proxy-cvw9l" [9b53e223-4882-4160-90e0-a696f562a007] Running
	I0919 22:15:40.770865   19483 system_pods.go:89] "kube-scheduler-addons-120954" [182e65bf-60e6-4b55-850c-1a9fa2efd82b] Running
	I0919 22:15:40.770877   19483 system_pods.go:89] "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:40.770891   19483 system_pods.go:89] "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:40.770902   19483 system_pods.go:89] "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:40.770914   19483 system_pods.go:89] "registry-creds-764b6fb674-z7tnf" [9e3508a8-a24b-4fed-a7a5-14ac8130a58f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:40.770921   19483 system_pods.go:89] "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:40.770935   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcn75" [ae07922c-b911-4895-9202-b690a79195bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:40.770961   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-j6s5g" [769612bd-b086-41f9-9223-70c4963110bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:40.770969   19483 system_pods.go:89] "storage-provisioner" [3c2c8204-2a1d-455b-9315-fc2539d7162f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:40.770986   19483 retry.go:31] will retry after 247.688563ms: missing components: kube-dns
	I0919 22:15:41.005509   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:41.006637   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.106052   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:41.107587   19483 system_pods.go:86] 20 kube-system pods found
	I0919 22:15:41.107626   19483 system_pods.go:89] "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:41.107638   19483 system_pods.go:89] "coredns-66bc5c9577-c8ncc" [abedc204-75fe-46af-8882-85f2ee09c5e0] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:15:41.107649   19483 system_pods.go:89] "csi-hostpath-attacher-0" [ea5638dc-d06c-4735-ab40-2da5f58c91f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:15:41.107658   19483 system_pods.go:89] "csi-hostpath-resizer-0" [596d07bf-2d9b-4279-8c13-1c530cfed969] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:41.107669   19483 system_pods.go:89] "csi-hostpathplugin-kqsvw" [90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:41.107679   19483 system_pods.go:89] "etcd-addons-120954" [368d4d94-f56a-4d5c-b555-2d182bc2c13c] Running
	I0919 22:15:41.107689   19483 system_pods.go:89] "kindnet-8tkc8" [30337f08-8000-4aeb-9124-53237d4322ab] Running
	I0919 22:15:41.107698   19483 system_pods.go:89] "kube-apiserver-addons-120954" [96c6db34-a08d-46a6-b200-6c0ef9b04176] Running
	I0919 22:15:41.107704   19483 system_pods.go:89] "kube-controller-manager-addons-120954" [b9fa85ca-6b19-4d0e-af1b-b537988a5937] Running
	I0919 22:15:41.107720   19483 system_pods.go:89] "kube-ingress-dns-minikube" [8d047756-aa17-4c00-95c6-70e1fc3336fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:41.107728   19483 system_pods.go:89] "kube-proxy-cvw9l" [9b53e223-4882-4160-90e0-a696f562a007] Running
	I0919 22:15:41.107735   19483 system_pods.go:89] "kube-scheduler-addons-120954" [182e65bf-60e6-4b55-850c-1a9fa2efd82b] Running
	I0919 22:15:41.107743   19483 system_pods.go:89] "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:41.107753   19483 system_pods.go:89] "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:41.107763   19483 system_pods.go:89] "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:41.107779   19483 system_pods.go:89] "registry-creds-764b6fb674-z7tnf" [9e3508a8-a24b-4fed-a7a5-14ac8130a58f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:41.107791   19483 system_pods.go:89] "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:41.107801   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcn75" [ae07922c-b911-4895-9202-b690a79195bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:41.107817   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-j6s5g" [769612bd-b086-41f9-9223-70c4963110bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:41.107829   19483 system_pods.go:89] "storage-provisioner" [3c2c8204-2a1d-455b-9315-fc2539d7162f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:15:41.107848   19483 retry.go:31] will retry after 470.374532ms: missing components: kube-dns
	I0919 22:15:41.167827   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:41.504769   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:41.509269   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:41.547011   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:41.583053   19483 system_pods.go:86] 20 kube-system pods found
	I0919 22:15:41.583096   19483 system_pods.go:89] "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I0919 22:15:41.583117   19483 system_pods.go:89] "coredns-66bc5c9577-c8ncc" [abedc204-75fe-46af-8882-85f2ee09c5e0] Running
	I0919 22:15:41.583128   19483 system_pods.go:89] "csi-hostpath-attacher-0" [ea5638dc-d06c-4735-ab40-2da5f58c91f9] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0919 22:15:41.583139   19483 system_pods.go:89] "csi-hostpath-resizer-0" [596d07bf-2d9b-4279-8c13-1c530cfed969] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0919 22:15:41.583147   19483 system_pods.go:89] "csi-hostpathplugin-kqsvw" [90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0919 22:15:41.583153   19483 system_pods.go:89] "etcd-addons-120954" [368d4d94-f56a-4d5c-b555-2d182bc2c13c] Running
	I0919 22:15:41.583159   19483 system_pods.go:89] "kindnet-8tkc8" [30337f08-8000-4aeb-9124-53237d4322ab] Running
	I0919 22:15:41.583164   19483 system_pods.go:89] "kube-apiserver-addons-120954" [96c6db34-a08d-46a6-b200-6c0ef9b04176] Running
	I0919 22:15:41.583169   19483 system_pods.go:89] "kube-controller-manager-addons-120954" [b9fa85ca-6b19-4d0e-af1b-b537988a5937] Running
	I0919 22:15:41.583178   19483 system_pods.go:89] "kube-ingress-dns-minikube" [8d047756-aa17-4c00-95c6-70e1fc3336fe] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0919 22:15:41.583188   19483 system_pods.go:89] "kube-proxy-cvw9l" [9b53e223-4882-4160-90e0-a696f562a007] Running
	I0919 22:15:41.583194   19483 system_pods.go:89] "kube-scheduler-addons-120954" [182e65bf-60e6-4b55-850c-1a9fa2efd82b] Running
	I0919 22:15:41.583206   19483 system_pods.go:89] "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 22:15:41.583214   19483 system_pods.go:89] "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I0919 22:15:41.583224   19483 system_pods.go:89] "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0919 22:15:41.583232   19483 system_pods.go:89] "registry-creds-764b6fb674-z7tnf" [9e3508a8-a24b-4fed-a7a5-14ac8130a58f] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I0919 22:15:41.583240   19483 system_pods.go:89] "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0919 22:15:41.583248   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-gcn75" [ae07922c-b911-4895-9202-b690a79195bb] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:41.583259   19483 system_pods.go:89] "snapshot-controller-7d9fbc56b8-j6s5g" [769612bd-b086-41f9-9223-70c4963110bd] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0919 22:15:41.583272   19483 system_pods.go:89] "storage-provisioner" [3c2c8204-2a1d-455b-9315-fc2539d7162f] Running
	I0919 22:15:41.583285   19483 system_pods.go:126] duration metric: took 1.087910958s to wait for k8s-apps to be running ...
	I0919 22:15:41.583296   19483 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:15:41.583348   19483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:15:41.597757   19483 system_svc.go:56] duration metric: took 14.450914ms WaitForService to wait for kubelet
	I0919 22:15:41.597789   19483 kubeadm.go:578] duration metric: took 42.789893506s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:15:41.597820   19483 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:15:41.600620   19483 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:15:41.600650   19483 node_conditions.go:123] node cpu capacity is 8
	I0919 22:15:41.600666   19483 node_conditions.go:105] duration metric: took 2.839884ms to run NodePressure ...
	I0919 22:15:41.600680   19483 start.go:241] waiting for startup goroutines ...
	I0919 22:15:41.668001   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:42.008638   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.009187   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:42.044209   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.168284   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:42.505222   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:42.505339   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:42.544246   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:42.667892   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:43.004263   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.006109   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.044967   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:43.168172   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:43.504833   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:43.505888   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:43.546146   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:43.667341   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:44.004457   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.005963   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:44.007934   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:15:44.044956   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.168011   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:44.504850   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:44.505861   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:44.543938   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:44.667706   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	W0919 22:15:44.671916   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:15:44.671948   19483 retry.go:31] will retry after 18.913763425s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
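The validation failure above means at least one YAML document in /etc/kubernetes/addons/ig-crd.yaml is missing the required top-level apiVersion and kind fields (or is an empty document), so kubectl's validation rejects that file even though the objects from ig-deployment.yaml in the same command apply cleanly ("unchanged" / "configured" in the stdout). For reference, a minimal sketch of the header every object in that file would need before kubectl accepts it; the group, kind, and resource names below are hypothetical placeholders, not the actual contents of ig-crd.yaml:

    # Minimal CustomResourceDefinition skeleton illustrating the required
    # top-level apiVersion and kind fields that the failing manifest lacks.
    apiVersion: apiextensions.k8s.io/v1            # required on every object
    kind: CustomResourceDefinition                 # required on every object
    metadata:
      name: traces.gadget.example.io               # placeholder name
    spec:
      group: gadget.example.io                     # placeholder API group
      names:
        kind: Trace
        plural: traces
      scope: Namespaced
      versions:
        - name: v1alpha1
          served: true
          storage: true
          schema:
            openAPIV3Schema:
              type: object
              x-kubernetes-preserve-unknown-fields: true

Passing --validate=false, as the error message suggests, would only suppress the check; the retry at 22:16:03 below fails with the identical error, so the manifest content itself is what is missing the fields.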
	I0919 22:15:45.004149   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.006000   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:45.044633   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.167729   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:45.503862   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:45.505730   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:45.545143   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:45.667475   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:46.004027   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.005915   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:46.044677   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:46.167877   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:46.504733   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:46.506187   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:46.544285   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:46.667781   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:47.003815   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.006037   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:47.045050   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.169216   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:47.504166   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:47.505709   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:47.544308   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:47.669896   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.005059   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.007732   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:48.044534   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.167379   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:48.503883   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:48.505338   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:48.544201   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:48.668660   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.004481   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.005556   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:49.105397   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.169276   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:49.504904   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:49.505253   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:49.544184   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:49.667614   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.005090   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.005161   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:50.043968   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.167421   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:50.504458   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:50.504940   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:50.544804   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:50.667234   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:51.004363   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.005844   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:51.044627   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.167556   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:51.504707   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:51.504988   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:51.544682   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:51.667903   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:52.004229   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.006349   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:52.044469   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.168215   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:52.504231   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:52.504955   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:52.544585   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:52.667591   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.003747   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.005614   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:53.044890   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.167893   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:53.504513   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:53.505751   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:53.544674   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:53.667567   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:54.004087   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.005681   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:54.044693   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.167534   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:54.503429   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:54.507000   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:54.544355   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:54.667785   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.004260   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:55.005650   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:55.044680   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.348528   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:55.504638   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:55.504911   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:55.544900   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:55.667431   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:56.004985   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:56.005014   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.044711   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.168244   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:56.505042   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:56.505090   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:56.544750   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:56.667245   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.003920   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.005842   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:57.044987   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.168171   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:57.504601   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:57.506203   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:57.544308   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:57.668197   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.003771   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.005663   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:58.044749   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.167193   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:58.504578   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:58.505784   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:58.544753   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:58.669003   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:59.004258   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.004830   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:59.045003   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.169417   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:15:59.504967   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:15:59.505016   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:15:59.606496   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:15:59.707139   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.003985   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.005753   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:00.044898   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.167308   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:00.504913   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:00.505058   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:00.544811   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:00.667525   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:01.004540   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.005811   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:01.044719   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.167749   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:01.504379   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:01.505539   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:01.544194   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:01.667705   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.003483   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.005319   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:02.044737   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.169006   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:02.503745   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:02.505356   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:02.575590   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:02.667627   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:03.003490   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.005200   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:03.044622   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.168736   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:03.503916   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:03.505328   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:03.544233   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:03.586308   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I0919 22:16:03.667582   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.010642   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:04.010718   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.045460   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.175404   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:04.506771   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:04.507074   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W0919 22:16:04.512297   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:04.512417   19483 retry.go:31] will retry after 44.937679017s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 22:16:04.544492   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:04.667997   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:05.004160   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.005502   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:05.044598   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.167599   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:05.504001   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:05.505331   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:05.545569   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:05.667877   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.004675   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.005491   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:06.044438   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.167859   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:06.503836   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:06.505374   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:06.544670   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:06.667360   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.004848   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.004861   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:07.045170   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.168051   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:07.503991   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:07.505612   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:07.544629   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:07.666923   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:08.004347   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.005587   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:08.044719   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.166852   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:08.556416   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:08.556833   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:08.556951   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:08.724913   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.004143   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.005522   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:09.044688   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.167409   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:09.504665   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:09.505010   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:09.545310   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:09.667706   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:10.003685   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.005221   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:10.043973   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.168000   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:10.503953   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:10.505471   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:10.544384   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:10.667906   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.004418   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.005412   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:11.044533   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.167924   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:11.504594   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:11.505571   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:11.544614   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:11.667019   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.004266   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.005479   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:12.044781   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.167069   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:12.504406   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:12.505588   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:12.604904   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:12.667541   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:13.004036   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.005473   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:13.044397   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.167790   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:13.505247   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:13.505272   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:13.544337   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:13.668045   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:14.004612   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.005399   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:14.044387   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.167920   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:14.504550   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:14.504954   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:14.544794   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:14.668264   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:15.003756   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.004949   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:15.045074   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.168774   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:15.504061   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:15.505293   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:15.605024   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:15.705433   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.004475   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.110223   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0919 22:16:16.110690   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.213415   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:16.504697   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:16.504714   19483 kapi.go:107] duration metric: took 1m16.002160529s to wait for kubernetes.io/minikube-addons=registry ...
	I0919 22:16:16.544769   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:16.667581   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:17.004434   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.105293   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.167950   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:17.504556   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:17.547059   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:17.668099   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.004452   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.044430   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.168388   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:18.504899   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:18.544510   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:18.669689   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.003707   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.044748   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.167233   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:19.504725   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:19.544569   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:19.669282   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.004676   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.044345   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.168116   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:20.510401   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:20.546083   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:20.669179   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.004979   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.044986   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.167685   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:21.503945   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:21.545033   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:21.668723   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.004136   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.045728   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.167687   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:22.504337   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:22.544890   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:22.667575   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.003659   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.045067   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.167711   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:23.504363   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:23.544037   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:23.667665   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.003722   19483 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0919 22:16:24.044993   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.167620   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:24.504138   19483 kapi.go:107] duration metric: took 1m24.003478924s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0919 22:16:24.545001   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:24.668167   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.044497   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.168432   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:25.545553   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:25.668900   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.044409   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:26.168258   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:26.544970   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:26.667728   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:27.046126   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.167586   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:27.545230   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:27.668677   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.044945   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.167682   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:28.545509   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:28.668239   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.045519   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.166986   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:29.544823   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:29.667981   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.044564   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0919 22:16:30.167199   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:30.545089   19483 kapi.go:107] duration metric: took 1m29.504047139s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0919 22:16:30.667324   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.167366   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:31.668054   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.167240   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:32.667633   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.168235   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:33.667614   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:34.167782   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:34.667965   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:35.166955   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:35.667672   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:36.168370   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:36.667725   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:37.168374   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:37.667899   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:38.167058   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:38.668002   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:39.167405   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:39.668065   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:40.167227   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:40.667241   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:41.168073   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:41.667582   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:42.167831   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:42.668135   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:43.167250   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:43.667627   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:44.167907   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:44.667624   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:45.168153   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:45.667655   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:46.168359   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:46.668060   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:47.167851   19483 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0919 22:16:47.667208   19483 kapi.go:107] duration metric: took 1m40.502799667s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0919 22:16:47.669251   19483 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-120954 cluster.
	I0919 22:16:47.671090   19483 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0919 22:16:47.672367   19483 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
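Note on the gcp-auth hint above: the opt-out it describes is an ordinary pod label that the addon's webhook looks for before mounting credentials. A minimal sketch of how such a label might be applied; the pod name and the "true" value are illustrative assumptions (only the gcp-auth-skip-secret key comes from the message itself), and the image is simply reused from elsewhere in this report:

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                 # hypothetical pod name
      labels:
        gcp-auth-skip-secret: "true"     # key taken from the minikube message above; value assumed
    spec:
      containers:
      - name: app
        image: docker.io/kicbase/echo-server:1.0   # image that also appears later in this report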
	I0919 22:16:49.450280   19483 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	W0919 22:16:50.008731   19483 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 22:16:50.008835   19483 out.go:285] ! Enabling 'inspektor-gadget' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: Process exited with status 1
	stdout:
	namespace/gadget unchanged
	serviceaccount/gadget unchanged
	configmap/gadget unchanged
	clusterrole.rbac.authorization.k8s.io/gadget-cluster-role unchanged
	clusterrolebinding.rbac.authorization.k8s.io/gadget-cluster-role-binding unchanged
	role.rbac.authorization.k8s.io/gadget-role unchanged
	rolebinding.rbac.authorization.k8s.io/gadget-role-binding unchanged
	daemonset.apps/gadget configured
	
	stderr:
	error: error validating "/etc/kubernetes/addons/ig-crd.yaml": error validating data: [apiVersion not set, kind not set]; if you choose to ignore these errors, turn validation off with --validate=false
	]
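The inspektor-gadget failure above is kubectl's client-side validation rejecting /etc/kubernetes/addons/ig-crd.yaml because the top-level apiVersion and kind fields are not set; the error text offers --validate=false as a bypass, but the usual fix is a manifest that carries those fields. A minimal sketch of the header every CustomResourceDefinition manifest needs; the group, plural, and kind names below are placeholders, not the actual inspektor-gadget CRD:

    apiVersion: apiextensions.k8s.io/v1      # required; its absence triggers "apiVersion not set"
    kind: CustomResourceDefinition           # required; its absence triggers "kind not set"
    metadata:
      name: examples.gadget.example.io       # hypothetical CRD name (<plural>.<group>)
    spec:
      group: gadget.example.io               # hypothetical API group
      scope: Namespaced
      names:
        kind: Example
        plural: examples
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            x-kubernetes-preserve-unknown-fields: true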
	I0919 22:16:50.011372   19483 out.go:179] * Enabled addons: ingress-dns, cloud-spanner, amd-gpu-device-plugin, registry-creds, storage-provisioner, nvidia-device-plugin, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, ingress, csi-hostpath-driver, gcp-auth
	I0919 22:16:50.012783   19483 addons.go:514] duration metric: took 1m51.204884523s for enable addons: enabled=[ingress-dns cloud-spanner amd-gpu-device-plugin registry-creds storage-provisioner nvidia-device-plugin metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry ingress csi-hostpath-driver gcp-auth]
	I0919 22:16:50.012831   19483 start.go:246] waiting for cluster config update ...
	I0919 22:16:50.012850   19483 start.go:255] writing updated cluster config ...
	I0919 22:16:50.013135   19483 ssh_runner.go:195] Run: rm -f paused
	I0919 22:16:50.016956   19483 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:16:50.020475   19483 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-c8ncc" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.024713   19483 pod_ready.go:94] pod "coredns-66bc5c9577-c8ncc" is "Ready"
	I0919 22:16:50.024734   19483 pod_ready.go:86] duration metric: took 4.240185ms for pod "coredns-66bc5c9577-c8ncc" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.026643   19483 pod_ready.go:83] waiting for pod "etcd-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.030130   19483 pod_ready.go:94] pod "etcd-addons-120954" is "Ready"
	I0919 22:16:50.030155   19483 pod_ready.go:86] duration metric: took 3.486372ms for pod "etcd-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.032058   19483 pod_ready.go:83] waiting for pod "kube-apiserver-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.036327   19483 pod_ready.go:94] pod "kube-apiserver-addons-120954" is "Ready"
	I0919 22:16:50.036352   19483 pod_ready.go:86] duration metric: took 4.268758ms for pod "kube-apiserver-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.038463   19483 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.420191   19483 pod_ready.go:94] pod "kube-controller-manager-addons-120954" is "Ready"
	I0919 22:16:50.420215   19483 pod_ready.go:86] duration metric: took 381.729367ms for pod "kube-controller-manager-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:50.620936   19483 pod_ready.go:83] waiting for pod "kube-proxy-cvw9l" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:51.020876   19483 pod_ready.go:94] pod "kube-proxy-cvw9l" is "Ready"
	I0919 22:16:51.020907   19483 pod_ready.go:86] duration metric: took 399.942966ms for pod "kube-proxy-cvw9l" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:51.221178   19483 pod_ready.go:83] waiting for pod "kube-scheduler-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:51.620719   19483 pod_ready.go:94] pod "kube-scheduler-addons-120954" is "Ready"
	I0919 22:16:51.620744   19483 pod_ready.go:86] duration metric: took 399.539626ms for pod "kube-scheduler-addons-120954" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:16:51.620755   19483 pod_ready.go:40] duration metric: took 1.603763741s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:16:51.667656   19483 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:16:51.669679   19483 out.go:179] * Done! kubectl is now configured to use "addons-120954" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.544677349Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-bgbdw/POD" id=5291fe09-09dc-42f4-989d-22e6d6ecab3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.544742920Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.563525423Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bgbdw Namespace:default ID:09e525adb4817434956db3f7ae692da34dbca4c885297b7f968b0048c59338bf UID:cff82966-95e6-4c35-ba53-5aadaa4a5755 NetNS:/var/run/netns/c241d372-8a98-4858-a3de-e5c7f98657f2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.563558756Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-bgbdw to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.573085140Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-bgbdw Namespace:default ID:09e525adb4817434956db3f7ae692da34dbca4c885297b7f968b0048c59338bf UID:cff82966-95e6-4c35-ba53-5aadaa4a5755 NetNS:/var/run/netns/c241d372-8a98-4858-a3de-e5c7f98657f2 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.573236208Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-bgbdw for CNI network kindnet (type=ptp)"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.573922406Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.574732762Z" level=info msg="Ran pod sandbox 09e525adb4817434956db3f7ae692da34dbca4c885297b7f968b0048c59338bf with infra container: default/hello-world-app-5d498dc89-bgbdw/POD" id=5291fe09-09dc-42f4-989d-22e6d6ecab3b name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.575794893Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=936358ee-5b46-4b11-b530-4459a8b634a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.576035615Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=936358ee-5b46-4b11-b530-4459a8b634a1 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.576561083Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=7d168d7e-4bd8-41bb-a9b5-f8d93aa4291b name=/runtime.v1.ImageService/PullImage
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.582905541Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 19 22:19:39 addons-120954 crio[928]: time="2025-09-19 22:19:39.732388272Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.170819649Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=7d168d7e-4bd8-41bb-a9b5-f8d93aa4291b name=/runtime.v1.ImageService/PullImage
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.171441404Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=1f36f0e6-91a6-40e4-a4d5-00cd216998dc name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.171965692Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1f36f0e6-91a6-40e4-a4d5-00cd216998dc name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.173213541Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=bf886ca2-bb55-4234-847d-10426a25a3ab name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.173790669Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=bf886ca2-bb55-4234-847d-10426a25a3ab name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.177532935Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-bgbdw/hello-world-app" id=403ca4ce-b862-45c1-98d1-80b51f1f95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.177932821Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.196404604Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/befcae5e96e2d7ff8a5a8c26f9979abddbc783498d75f20d07b27989873392d1/merged/etc/passwd: no such file or directory"
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.196441298Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/befcae5e96e2d7ff8a5a8c26f9979abddbc783498d75f20d07b27989873392d1/merged/etc/group: no such file or directory"
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.271682280Z" level=info msg="Created container 821bf44fd960df87de80be332607048ec28726f081a2a940942bf9829c5504e8: default/hello-world-app-5d498dc89-bgbdw/hello-world-app" id=403ca4ce-b862-45c1-98d1-80b51f1f95e4 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.272738951Z" level=info msg="Starting container: 821bf44fd960df87de80be332607048ec28726f081a2a940942bf9829c5504e8" id=67c48a4d-870d-4f03-aa71-7c467a129f77 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:19:40 addons-120954 crio[928]: time="2025-09-19 22:19:40.283095869Z" level=info msg="Started container" PID=12297 containerID=821bf44fd960df87de80be332607048ec28726f081a2a940942bf9829c5504e8 description=default/hello-world-app-5d498dc89-bgbdw/hello-world-app id=67c48a4d-870d-4f03-aa71-7c467a129f77 name=/runtime.v1.RuntimeService/StartContainer sandboxID=09e525adb4817434956db3f7ae692da34dbca4c885297b7f968b0048c59338bf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	821bf44fd960d       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   09e525adb4817       hello-world-app-5d498dc89-bgbdw
	9d2a080022c47       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                              2 minutes ago            Running             nginx                     0                   d0975709fa3ca       nginx
	748745af471c5       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          2 minutes ago            Running             busybox                   0                   2f9d2a1718068       busybox
	9bd2b140fc735       registry.k8s.io/ingress-nginx/controller@sha256:1f7eaeb01933e719c8a9f4acd8181e555e582330c7d50f24484fb64d2ba9b2ef             3 minutes ago            Running             controller                0                   604b11d077a95       ingress-nginx-controller-9cc49f96f-bzbdq
	8fe2da9d40c6e       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:66fdf18cc8a577423b2a36b96a5be40fe690fdb986bfe7875f54edfa9c7d19a5            3 minutes ago            Running             gadget                    0                   6689c01ef91e6       gadget-28pr2
	b1a0282023b5d       8c217da6734db0feee6a8fa1d169714549c20bcb8c123ef218aec5d591e3fd65                                                             3 minutes ago            Exited              patch                     2                   6edfa5b84e409       ingress-nginx-admission-patch-9zsfg
	5cd1d421e1e82       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:050a34002d5bb4966849c880c56c91f5320372564245733b33d4b3461b4dbd24   3 minutes ago            Exited              create                    0                   7c50d6238635d       ingress-nginx-admission-create-lld4n
	913ca573b47e5       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7               3 minutes ago            Running             minikube-ingress-dns      0                   497b006621296       kube-ingress-dns-minikube
	77dfa6e2c39a3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                             3 minutes ago            Running             coredns                   0                   43acef8ba226a       coredns-66bc5c9577-c8ncc
	4d784988fd48b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             3 minutes ago            Running             storage-provisioner       0                   35aeb6d121feb       storage-provisioner
	1eaebc35455cf       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                             4 minutes ago            Running             kindnet-cni               0                   f8de8d1236fcc       kindnet-8tkc8
	aaf9516d307a0       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                             4 minutes ago            Running             kube-proxy                0                   c62b58a4986b8       kube-proxy-cvw9l
	4d27633d5328b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                             4 minutes ago            Running             kube-apiserver            0                   f1aa9075e6aa6       kube-apiserver-addons-120954
	306c54e1b1de4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                             4 minutes ago            Running             kube-scheduler            0                   a9c55723f9a67       kube-scheduler-addons-120954
	cf46d49792a48       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                             4 minutes ago            Running             kube-controller-manager   0                   2c2a16e195064       kube-controller-manager-addons-120954
	ac4975ea0aee2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                             4 minutes ago            Running             etcd                      0                   80ec6fe50b64c       etcd-addons-120954
	
	
	==> coredns [77dfa6e2c39a3cf18ab3c6ee4a46cccaef2398f42814f95da900096dc467df37] <==
	[INFO] 10.244.0.16:45441 - 28275 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000127633s
	[INFO] 10.244.0.16:47158 - 28538 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000084734s
	[INFO] 10.244.0.16:47158 - 28282 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000083708s
	[INFO] 10.244.0.16:57948 - 47934 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000067085s
	[INFO] 10.244.0.16:57948 - 48147 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000109946s
	[INFO] 10.244.0.16:50105 - 24868 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000104945s
	[INFO] 10.244.0.16:50105 - 24649 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000124588s
	[INFO] 10.244.0.22:32942 - 57221 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201905s
	[INFO] 10.244.0.22:55983 - 34630 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000276005s
	[INFO] 10.244.0.22:40495 - 19904 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000156049s
	[INFO] 10.244.0.22:41329 - 46782 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000209041s
	[INFO] 10.244.0.22:58838 - 13097 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140002s
	[INFO] 10.244.0.22:53843 - 25470 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000201999s
	[INFO] 10.244.0.22:44047 - 60731 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.007833846s
	[INFO] 10.244.0.22:37416 - 21699 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.008185286s
	[INFO] 10.244.0.22:36262 - 49358 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007798068s
	[INFO] 10.244.0.22:48273 - 24325 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.007945917s
	[INFO] 10.244.0.22:57951 - 8849 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006264683s
	[INFO] 10.244.0.22:54176 - 27093 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.011013718s
	[INFO] 10.244.0.22:49888 - 55558 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.007001085s
	[INFO] 10.244.0.22:51345 - 37512 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.008106424s
	[INFO] 10.244.0.22:48503 - 64797 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.000858521s
	[INFO] 10.244.0.22:46018 - 1873 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001185519s
	[INFO] 10.244.0.26:52361 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000278863s
	[INFO] 10.244.0.26:60136 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000159423s
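The NXDOMAIN answers above are the expected expansion of the pod DNS search path (cluster suffixes plus the GCE host's domains) before the bare name resolves, not errors. If a workload needed to skip that expansion, lowering ndots via the pod's dnsConfig would be one option; a hedged sketch, with the pod and container names as placeholders:

    apiVersion: v1
    kind: Pod
    metadata:
      name: dns-tuned-example              # hypothetical pod name
    spec:
      dnsConfig:
        options:
        - name: ndots
          value: "1"                        # resolve dotted names directly instead of trying search domains first
      containers:
      - name: app
        image: gcr.io/k8s-minikube/busybox # image taken from the container list in this report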
	
	
	==> describe nodes <==
	Name:               addons-120954
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-120954
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=addons-120954
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_14_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-120954
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:14:50 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-120954
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:19:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:17:57 +0000   Fri, 19 Sep 2025 22:14:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:17:57 +0000   Fri, 19 Sep 2025 22:14:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:17:57 +0000   Fri, 19 Sep 2025 22:14:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:17:57 +0000   Fri, 19 Sep 2025 22:15:40 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-120954
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 e73d81bc35a54899b69b67c1df39b430
	  System UUID:                48172201-f294-4281-8c5c-8c54bcd17cdc
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m48s
	  default                     hello-world-app-5d498dc89-bgbdw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  gadget                      gadget-28pr2                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  ingress-nginx               ingress-nginx-controller-9cc49f96f-bzbdq    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m40s
	  kube-system                 coredns-66bc5c9577-c8ncc                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m42s
	  kube-system                 etcd-addons-120954                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m49s
	  kube-system                 kindnet-8tkc8                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m42s
	  kube-system                 kube-apiserver-addons-120954                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-controller-manager-addons-120954       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-cvw9l                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m42s
	  kube-system                 kube-scheduler-addons-120954                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m47s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             310Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  Starting                 4m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m52s (x8 over 4m52s)  kubelet          Node addons-120954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m52s (x8 over 4m52s)  kubelet          Node addons-120954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m52s (x8 over 4m52s)  kubelet          Node addons-120954 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m47s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m47s                  kubelet          Node addons-120954 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m47s                  kubelet          Node addons-120954 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m47s                  kubelet          Node addons-120954 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m43s                  node-controller  Node addons-120954 event: Registered Node addons-120954 in Controller
	  Normal  NodeReady                4m                     kubelet          Node addons-120954 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [ac4975ea0aee256f9ee4bf44334c27e94f193b2ba0d77d15a874a6041c0d8e7d] <==
	{"level":"warn","ts":"2025-09-19T22:14:50.141487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53332","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:14:50.148607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:14:50.167190Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53366","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:14:50.174925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:14:50.181209Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:14:50.233985Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:01.575596Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:01.583016Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:59186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:27.623573Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:27.630267Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55266","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:27.650039Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55284","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:27.657288Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:55292","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:15:55.346860Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"180.502014ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:15:55.346983Z","caller":"traceutil/trace.go:172","msg":"trace[466676697] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1048; }","duration":"180.646881ms","start":"2025-09-19T22:15:55.166315Z","end":"2025-09-19T22:15:55.346962Z","steps":["trace[466676697] 'range keys from in-memory index tree'  (duration: 180.427925ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:02.430728Z","caller":"traceutil/trace.go:172","msg":"trace[1707911142] transaction","detail":"{read_only:false; response_revision:1100; number_of_response:1; }","duration":"102.978145ms","start":"2025-09-19T22:16:02.327720Z","end":"2025-09-19T22:16:02.430698Z","steps":["trace[1707911142] 'process raft request'  (duration: 45.989126ms)","trace[1707911142] 'compare'  (duration: 56.817625ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T22:16:02.499075Z","caller":"traceutil/trace.go:172","msg":"trace[1757402400] transaction","detail":"{read_only:false; response_revision:1102; number_of_response:1; }","duration":"113.900411ms","start":"2025-09-19T22:16:02.385159Z","end":"2025-09-19T22:16:02.499059Z","steps":["trace[1757402400] 'process raft request'  (duration: 113.790121ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:08.554622Z","caller":"traceutil/trace.go:172","msg":"trace[1418591593] transaction","detail":"{read_only:false; response_revision:1128; number_of_response:1; }","duration":"151.786527ms","start":"2025-09-19T22:16:08.402816Z","end":"2025-09-19T22:16:08.554602Z","steps":["trace[1418591593] 'process raft request'  (duration: 151.652297ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:08.715196Z","caller":"traceutil/trace.go:172","msg":"trace[1769386827] linearizableReadLoop","detail":"{readStateIndex:1160; appliedIndex:1160; }","duration":"131.869452ms","start":"2025-09-19T22:16:08.583308Z","end":"2025-09-19T22:16:08.715177Z","steps":["trace[1769386827] 'read index received'  (duration: 131.86223ms)","trace[1769386827] 'applied index is now lower than readState.Index'  (duration: 5.915µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:16:08.722935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"139.613515ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:08.722995Z","caller":"traceutil/trace.go:172","msg":"trace[1775529708] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1128; }","duration":"139.687509ms","start":"2025-09-19T22:16:08.583295Z","end":"2025-09-19T22:16:08.722982Z","steps":["trace[1775529708] 'agreement among raft nodes before linearized reading'  (duration: 131.947209ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:16:08.723180Z","caller":"traceutil/trace.go:172","msg":"trace[116549510] transaction","detail":"{read_only:false; response_revision:1129; number_of_response:1; }","duration":"161.220041ms","start":"2025-09-19T22:16:08.561943Z","end":"2025-09-19T22:16:08.723163Z","steps":["trace[116549510] 'process raft request'  (duration: 153.291ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:16:16.108774Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"103.739687ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:16:16.108842Z","caller":"traceutil/trace.go:172","msg":"trace[1191953837] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1179; }","duration":"103.815549ms","start":"2025-09-19T22:16:16.005011Z","end":"2025-09-19T22:16:16.108826Z","steps":["trace[1191953837] 'agreement among raft nodes before linearized reading'  (duration: 43.69495ms)","trace[1191953837] 'range keys from in-memory index tree'  (duration: 60.0179ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T22:16:16.108989Z","caller":"traceutil/trace.go:172","msg":"trace[1715911120] transaction","detail":"{read_only:false; response_revision:1180; number_of_response:1; }","duration":"105.088224ms","start":"2025-09-19T22:16:16.003881Z","end":"2025-09-19T22:16:16.108969Z","steps":["trace[1715911120] 'process raft request'  (duration: 44.834323ms)","trace[1715911120] 'compare'  (duration: 60.061641ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T22:16:22.356918Z","caller":"traceutil/trace.go:172","msg":"trace[1162455021] transaction","detail":"{read_only:false; response_revision:1205; number_of_response:1; }","duration":"104.084451ms","start":"2025-09-19T22:16:22.252815Z","end":"2025-09-19T22:16:22.356900Z","steps":["trace[1162455021] 'process raft request'  (duration: 58.44145ms)","trace[1162455021] 'compare'  (duration: 45.562483ms)"],"step_count":2}
	
	
	==> kernel <==
	 22:19:41 up  1:02,  0 users,  load average: 0.40, 0.66, 0.34
	Linux addons-120954 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [1eaebc35455cfb0d0ee446f1f3befcb9e0574abb8fd4009615bc10fadcd3feb4] <==
	I0919 22:17:39.791497       1 main.go:301] handling current node
	I0919 22:17:49.791352       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:17:49.791402       1 main.go:301] handling current node
	I0919 22:17:59.791406       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:17:59.791435       1 main.go:301] handling current node
	I0919 22:18:09.791814       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:09.791858       1 main.go:301] handling current node
	I0919 22:18:19.791491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:19.791520       1 main.go:301] handling current node
	I0919 22:18:29.791443       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:29.791489       1 main.go:301] handling current node
	I0919 22:18:39.791399       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:39.791432       1 main.go:301] handling current node
	I0919 22:18:49.791716       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:49.791756       1 main.go:301] handling current node
	I0919 22:18:59.791774       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:18:59.791835       1 main.go:301] handling current node
	I0919 22:19:09.791441       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:09.791479       1 main.go:301] handling current node
	I0919 22:19:19.791551       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:19.791588       1 main.go:301] handling current node
	I0919 22:19:29.791928       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:29.791964       1 main.go:301] handling current node
	I0919 22:19:39.791867       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:19:39.791905       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4d27633d5328b45f2537a03acf72e994625c24ac9d6ce8b0e85f2aadfecc316e] <==
	E0919 22:17:02.445215       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36198: use of closed network connection
	E0919 22:17:02.630947       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36222: use of closed network connection
	I0919 22:17:11.763305       1 alloc.go:328] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.101.215.149"}
	I0919 22:17:12.568384       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:17:17.957599       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I0919 22:17:18.129649       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.105.211.186"}
	I0919 22:17:19.246805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:17:47.280809       1 authentication.go:75] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0919 22:17:50.343732       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0919 22:17:51.042753       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0919 22:18:06.637096       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:18:06.637169       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:18:06.652671       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:18:06.652818       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:18:06.666401       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:18:06.666450       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0919 22:18:06.684519       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0919 22:18:06.684558       1 handler.go:285] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0919 22:18:07.653175       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0919 22:18:07.685189       1 cacher.go:182] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0919 22:18:07.703055       1 cacher.go:182] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I0919 22:18:17.178377       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:18:33.081111       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:19:37.438301       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:19:39.320498       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.99.104.139"}
	
	
	==> kube-controller-manager [cf46d49792a482575f0f0712f5a94039d15b2f3d8c96ec8f7dda74e4fbffd145] <==
	E0919 22:18:15.609438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:16.629760       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:16.630699       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:23.249498       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:23.250369       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:23.865712       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:23.866702       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:24.795656       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:24.796451       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	I0919 22:18:27.747416       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I0919 22:18:27.747451       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:18:27.773159       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I0919 22:18:27.773214       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:18:40.045013       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:40.045891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:45.449234       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:45.449996       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:18:46.479777       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:18:46.480700       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:05.671033       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:05.671969       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:20.387535       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:20.388486       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	E0919 22:19:29.412392       1 reflector.go:422] "The watchlist request ended with an error, falling back to the standard LIST/WATCH semantics because making progress is better than deadlocking" err="the server could not find the requested resource"
	E0919 22:19:29.413542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError" reflector="k8s.io/client-go/metadata/metadatainformer/informer.go:138" type="*v1.PartialObjectMetadata"
	
	
	==> kube-proxy [aaf9516d307a03b7d018d8f06c38fff38487c37d2ff8024083928e4bd139982a] <==
	I0919 22:14:59.337613       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:14:59.857297       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:14:59.957489       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:14:59.963181       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:14:59.967066       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:15:00.080593       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:15:00.080753       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:15:00.160611       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:15:00.168515       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:15:00.168775       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:15:00.172162       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:15:00.172250       1 config.go:309] "Starting node config controller"
	I0919 22:15:00.172268       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:15:00.172275       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:15:00.172250       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:15:00.172464       1 config.go:200] "Starting service config controller"
	I0919 22:15:00.172747       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:15:00.172540       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:15:00.172826       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:15:00.272832       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:15:00.272903       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:15:00.272922       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [306c54e1b1de43ccf01f345c1b3518fcb6aaae77d673d5391cecf424769879db] <==
	E0919 22:14:50.633981       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:14:50.633977       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:14:50.634096       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:14:50.634689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:14:50.634857       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:14:50.634900       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:14:50.635126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:14:50.635471       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:14:50.635524       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:14:50.635597       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:14:50.636125       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:14:51.481205       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:14:51.524405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:14:51.532854       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:14:51.579290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:14:51.594299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:14:51.699793       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:14:51.708871       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:14:51.772453       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:14:51.797488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:14:51.839904       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:14:51.839915       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:14:51.859705       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:14:52.105150       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I0919 22:14:55.230262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:18:09 addons-120954 kubelet[1558]: I0919 22:18:09.131777    1558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4" path="/var/lib/kubelet/pods/90ef02f0-bf4f-410d-a3bb-2dea0f0f30d4/volumes"
	Sep 19 22:18:09 addons-120954 kubelet[1558]: I0919 22:18:09.132243    1558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae07922c-b911-4895-9202-b690a79195bb" path="/var/lib/kubelet/pods/ae07922c-b911-4895-9202-b690a79195bb/volumes"
	Sep 19 22:18:09 addons-120954 kubelet[1558]: I0919 22:18:09.132506    1558 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ea5638dc-d06c-4735-ab40-2da5f58c91f9" path="/var/lib/kubelet/pods/ea5638dc-d06c-4735-ab40-2da5f58c91f9/volumes"
	Sep 19 22:18:13 addons-120954 kubelet[1558]: E0919 22:18:13.166944    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320293166696688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:13 addons-120954 kubelet[1558]: E0919 22:18:13.166975    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320293166696688  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:22 addons-120954 kubelet[1558]: I0919 22:18:22.129784    1558 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 22:18:23 addons-120954 kubelet[1558]: E0919 22:18:23.170214    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320303169901137  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:23 addons-120954 kubelet[1558]: E0919 22:18:23.170258    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320303169901137  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:33 addons-120954 kubelet[1558]: E0919 22:18:33.172384    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320313172173272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:33 addons-120954 kubelet[1558]: E0919 22:18:33.172414    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320313172173272  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:43 addons-120954 kubelet[1558]: E0919 22:18:43.174932    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320323174650533  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:43 addons-120954 kubelet[1558]: E0919 22:18:43.174964    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320323174650533  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:53 addons-120954 kubelet[1558]: E0919 22:18:53.177164    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320333176773750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:18:53 addons-120954 kubelet[1558]: E0919 22:18:53.177207    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320333176773750  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:03 addons-120954 kubelet[1558]: E0919 22:19:03.179831    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320343179576170  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:03 addons-120954 kubelet[1558]: E0919 22:19:03.179859    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320343179576170  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:13 addons-120954 kubelet[1558]: E0919 22:19:13.182446    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320353182200743  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:13 addons-120954 kubelet[1558]: E0919 22:19:13.182480    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320353182200743  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:23 addons-120954 kubelet[1558]: E0919 22:19:23.185405    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320363185145814  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:23 addons-120954 kubelet[1558]: E0919 22:19:23.185436    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320363185145814  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:33 addons-120954 kubelet[1558]: E0919 22:19:33.187910    1558 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758320373187648269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:33 addons-120954 kubelet[1558]: E0919 22:19:33.187949    1558 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758320373187648269  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:608711}  inodes_used:{value:230}}"
	Sep 19 22:19:36 addons-120954 kubelet[1558]: I0919 22:19:36.130458    1558 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Sep 19 22:19:39 addons-120954 kubelet[1558]: I0919 22:19:39.295719    1558 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jss\" (UniqueName: \"kubernetes.io/projected/cff82966-95e6-4c35-ba53-5aadaa4a5755-kube-api-access-x2jss\") pod \"hello-world-app-5d498dc89-bgbdw\" (UID: \"cff82966-95e6-4c35-ba53-5aadaa4a5755\") " pod="default/hello-world-app-5d498dc89-bgbdw"
	Sep 19 22:19:41 addons-120954 kubelet[1558]: I0919 22:19:41.079645    1558 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-bgbdw" podStartSLOduration=1.48364817 podStartE2EDuration="2.079623737s" podCreationTimestamp="2025-09-19 22:19:39 +0000 UTC" firstStartedPulling="2025-09-19 22:19:39.576245596 +0000 UTC m=+286.534967934" lastFinishedPulling="2025-09-19 22:19:40.172221119 +0000 UTC m=+287.130943501" observedRunningTime="2025-09-19 22:19:41.079618188 +0000 UTC m=+288.038340523" watchObservedRunningTime="2025-09-19 22:19:41.079623737 +0000 UTC m=+288.038346075"
	
	
	==> storage-provisioner [4d784988fd48b11a0b4c665041c8f9b15f635efdf3c85830a3caf1da1ea5394c] <==
	W0919 22:19:15.952444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:17.955814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:17.960868       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:19.964355       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:19.968201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:21.971943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:21.975719       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:23.978343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:23.982969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:25.985578       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:25.989331       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:27.992449       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:27.996467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:29.999078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:30.003474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:32.006864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:32.013435       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:34.016774       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:34.020936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:36.023987       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:36.028955       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:38.031755       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:38.036467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:40.039548       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:19:40.043961       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-120954 -n addons-120954
helpers_test.go:269: (dbg) Run:  kubectl --context addons-120954 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-lld4n ingress-nginx-admission-patch-9zsfg
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-120954 describe pod ingress-nginx-admission-create-lld4n ingress-nginx-admission-patch-9zsfg
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-120954 describe pod ingress-nginx-admission-create-lld4n ingress-nginx-admission-patch-9zsfg: exit status 1 (62.606751ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-lld4n" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-9zsfg" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-120954 describe pod ingress-nginx-admission-create-lld4n ingress-nginx-admission-patch-9zsfg: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable ingress-dns --alsologtostderr -v=1: (1.149980149s)
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable ingress --alsologtostderr -v=1: (7.724976828s)
--- FAIL: TestAddons/parallel/Ingress (153.13s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (603.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-393395 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-393395 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-cc7xp" [f103572c-81a5-4040-b7e8-02f1205d561a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: rate: Wait(n=1) would exceed context deadline
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-393395 -n functional-393395
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-09-19 22:33:13.827190478 +0000 UTC m=+1152.535851451
functional_test.go:1645: (dbg) Run:  kubectl --context functional-393395 describe po hello-node-connect-7d85dfc575-cc7xp -n default
functional_test.go:1645: (dbg) kubectl --context functional-393395 describe po hello-node-connect-7d85dfc575-cc7xp -n default:
Name:             hello-node-connect-7d85dfc575-cc7xp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-393395/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:23:13 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lldd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2lldd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cc7xp to functional-393395
Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m1s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m56s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m43s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-393395 logs hello-node-connect-7d85dfc575-cc7xp -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-393395 logs hello-node-connect-7d85dfc575-cc7xp -n default: exit status 1 (65.836284ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cc7xp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1645: kubectl --context functional-393395 logs hello-node-connect-7d85dfc575-cc7xp -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-393395 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-cc7xp
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-393395/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:23:13 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.9
IPs:
IP:           10.244.0.9
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lldd (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-2lldd:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                   From               Message
----     ------     ----                  ----               -------
Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cc7xp to functional-393395
Normal   Pulling    7m2s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m2s (x5 over 10m)    kubelet            Error: ErrImagePull
Warning  Failed     4m57s (x20 over 10m)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m44s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
functional_test.go:1618: (dbg) Run:  kubectl --context functional-393395 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-393395 logs -l app=hello-node-connect: exit status 1 (63.479591ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-cc7xp" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1620: "kubectl --context functional-393395 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-393395 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.109.191.201
IPs:                      10.109.191.201
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  30998/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
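Note: the empty Endpoints field above follows directly from the pod failure: with its only backing pod un-Ready, the service has nothing behind NodePort 30998. This can be confirmed with (illustrative, not part of the recorded run):

	kubectl --context functional-393395 get endpoints hello-node-connect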
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-393395
helpers_test.go:243: (dbg) docker inspect functional-393395:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d",
	        "Created": "2025-09-19T22:20:51.291987796Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 44934,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:20:51.330771788Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d/hostname",
	        "HostsPath": "/var/lib/docker/containers/96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d/hosts",
	        "LogPath": "/var/lib/docker/containers/96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d/96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d-json.log",
	        "Name": "/functional-393395",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-393395:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-393395",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "96d0f85e38f5ccd0d42510a6c87306ed3747fb691896f7d1bb02cb5f9879205d",
	                "LowerDir": "/var/lib/docker/overlay2/2e24d9089b286c99bd5221e3e8075a72b8cb14cd911a4719aeaa96cc28d919b4-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2e24d9089b286c99bd5221e3e8075a72b8cb14cd911a4719aeaa96cc28d919b4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2e24d9089b286c99bd5221e3e8075a72b8cb14cd911a4719aeaa96cc28d919b4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2e24d9089b286c99bd5221e3e8075a72b8cb14cd911a4719aeaa96cc28d919b4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-393395",
	                "Source": "/var/lib/docker/volumes/functional-393395/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-393395",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-393395",
	                "name.minikube.sigs.k8s.io": "functional-393395",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "69d73a58813a76fa7f01dff7a18725d56d7255c149b221752b41438d4496d809",
	            "SandboxKey": "/var/run/docker/netns/69d73a58813a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-393395": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2e:a8:0b:f6:0b:5d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4fa6c6ffe98aa9c532d0fb5c99f2eac5fa857ef0cd48d35dbdc1bfae1703af9f",
	                    "EndpointID": "cc89c414d7d6af468ad25b09b5dace559ccd1b2bc3c84a92765d7ca170f51343",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-393395",
	                        "96d0f85e38f5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-393395 -n functional-393395
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 logs -n 25: (1.585876582s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                  ARGS                                                  │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-393395 --kill=true                                                                       │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │                     │
	│ addons         │ functional-393395 addons list                                                                          │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ addons         │ functional-393395 addons list -o json                                                                  │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /etc/ssl/certs/18175.pem                                                │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /usr/share/ca-certificates/18175.pem                                    │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /etc/ssl/certs/51391683.0                                               │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /etc/ssl/certs/181752.pem                                               │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /usr/share/ca-certificates/181752.pem                                   │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                               │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh sudo cat /etc/test/nested/copy/18175/hosts                                       │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ image          │ functional-393395 image ls --format short --alsologtostderr                                            │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ image          │ functional-393395 image ls --format yaml --alsologtostderr                                             │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ ssh            │ functional-393395 ssh pgrep buildkitd                                                                  │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │                     │
	│ image          │ functional-393395 image build -t localhost/my-image:functional-393395 testdata/build --alsologtostderr │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ image          │ functional-393395 image ls --format json --alsologtostderr                                             │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ image          │ functional-393395 image ls --format table --alsologtostderr                                            │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ update-context │ functional-393395 update-context --alsologtostderr -v=2                                                │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ update-context │ functional-393395 update-context --alsologtostderr -v=2                                                │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ update-context │ functional-393395 update-context --alsologtostderr -v=2                                                │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ image          │ functional-393395 image ls                                                                             │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:23 UTC │ 19 Sep 25 22:23 UTC │
	│ service        │ functional-393395 service list                                                                         │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:32 UTC │ 19 Sep 25 22:33 UTC │
	│ service        │ functional-393395 service list -o json                                                                 │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:33 UTC │
	│ service        │ functional-393395 service --namespace=default --https --url hello-node                                 │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ service        │ functional-393395 service hello-node --url --format={{.IP}}                                            │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ service        │ functional-393395 service hello-node --url                                                             │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	I0919 22:23:00.337823   55715 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	Log file created at: 2025/09/19 22:23:00
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:23:00.340147   55715 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:00.337874   55725 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:00.338169   55725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.338180   55725 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:00.338184   55725 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.338596   55725 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:23:00.339262   55725 out.go:368] Setting JSON to false
	I0919 22:23:00.340520   55725 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3930,"bootTime":1758316650,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:00.340657   55725 start.go:140] virtualization: kvm guest
	I0919 22:23:00.344305   55725 out.go:179] * [functional-393395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:00.346186   55725 notify.go:220] Checking for updates...
	I0919 22:23:00.346245   55725 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:00.348179   55725 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:00.349805   55725 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:23:00.351533   55725 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:23:00.353241   55725 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:00.354766   55725 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:00.342342   55715 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:23:00.343064   55715 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:00.369714   55715 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:00.369870   55715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:00.445595   55715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:23:00.430589902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:00.445709   55715 docker.go:318] overlay module found
	I0919 22:23:00.450394   55715 out.go:179] * Using the docker driver based on existing profile
	I0919 22:23:00.452886   55715 start.go:304] selected driver: docker
	I0919 22:23:00.452906   55715 start.go:918] validating driver "docker" against &{Name:functional-393395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-393395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:00.452996   55715 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:00.455058   55715 out.go:203] 
	W0919 22:23:00.456542   55715 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:23:00.356557   55725 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:23:00.357058   55725 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:00.383656   55725 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:00.383746   55725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:00.457139   55725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:23:00.446772607 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:00.457285   55725 docker.go:318] overlay module found
	I0919 22:23:00.458285   55715 out.go:203] 
	I0919 22:23:00.459143   55725 out.go:179] * Using the docker driver based on existing profile
	I0919 22:23:00.460705   55725 start.go:304] selected driver: docker
	I0919 22:23:00.460719   55725 start.go:918] validating driver "docker" against &{Name:functional-393395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-393395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:00.460831   55725 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:00.460938   55725 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:00.531851   55725 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:23:00.520003264 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:00.532834   55725 cni.go:84] Creating CNI manager for ""
	I0919 22:23:00.532913   55725 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:23:00.532973   55725 start.go:348] cluster config:
	{Name:functional-393395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-393395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket:
NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:00.535266   55725 out.go:179] * dry-run validation complete!
	
	
	==> CRI-O <==
	Sep 19 22:23:27 functional-393395 crio[4232]: time="2025-09-19 22:23:27.961268406Z" level=info msg="Started container" PID=8397 containerID=dfae97536d976b98495367a4d318ad9e89e0b956268ce22c20fa2d7084dddeff description=default/sp-pod/myfrontend id=b99dadd9-b803-4bda-abdb-5fda96ae2dcf name=/runtime.v1.RuntimeService/StartContainer sandboxID=3f367b6ab7a9096ad658eb54fc6111ae0b8293f05e637940514bc1c5f2176134
	Sep 19 22:23:29 functional-393395 crio[4232]: time="2025-09-19 22:23:29.296560368Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3ddbef22-52c8-450c-a1d3-9228b8de457f name=/runtime.v1.ImageService/PullImage
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.301835166Z" level=info msg="Stopping pod sandbox: 3a1f77469e585028b35225ab01fec8ddcf4a56ddcd78ac63b05e94a7b9672b0d" id=a1043bad-3392-4a47-88aa-7a3019932dc9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.301873902Z" level=info msg="Stopped pod sandbox (already stopped): 3a1f77469e585028b35225ab01fec8ddcf4a56ddcd78ac63b05e94a7b9672b0d" id=a1043bad-3392-4a47-88aa-7a3019932dc9 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.302364701Z" level=info msg="Removing pod sandbox: 3a1f77469e585028b35225ab01fec8ddcf4a56ddcd78ac63b05e94a7b9672b0d" id=69ea0a56-2d3d-48d8-b602-c5a0fe87df8a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.310638043Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.310917884Z" level=info msg="Removed pod sandbox: 3a1f77469e585028b35225ab01fec8ddcf4a56ddcd78ac63b05e94a7b9672b0d" id=69ea0a56-2d3d-48d8-b602-c5a0fe87df8a name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.311793515Z" level=info msg="Stopping pod sandbox: ffa6b2b4c12ec942bc0f6fb17122177531825e1a1a924dae76caa61f38db82ba" id=ee5c27e3-5195-45a2-8f02-081c6293aa04 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.311843722Z" level=info msg="Stopped pod sandbox (already stopped): ffa6b2b4c12ec942bc0f6fb17122177531825e1a1a924dae76caa61f38db82ba" id=ee5c27e3-5195-45a2-8f02-081c6293aa04 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.312420895Z" level=info msg="Removing pod sandbox: ffa6b2b4c12ec942bc0f6fb17122177531825e1a1a924dae76caa61f38db82ba" id=550068e5-9412-4579-a457-b9153769c653 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.319461268Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.319497454Z" level=info msg="Removed pod sandbox: ffa6b2b4c12ec942bc0f6fb17122177531825e1a1a924dae76caa61f38db82ba" id=550068e5-9412-4579-a457-b9153769c653 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.319978938Z" level=info msg="Stopping pod sandbox: afb7f2dd6427094e1c71ca64d961b1ae20c0b0d6d16d0bb11ab0244135ea3396" id=f2bf596d-2937-46b0-a184-eebb02aa080e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.320041102Z" level=info msg="Stopped pod sandbox (already stopped): afb7f2dd6427094e1c71ca64d961b1ae20c0b0d6d16d0bb11ab0244135ea3396" id=f2bf596d-2937-46b0-a184-eebb02aa080e name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.320452045Z" level=info msg="Removing pod sandbox: afb7f2dd6427094e1c71ca64d961b1ae20c0b0d6d16d0bb11ab0244135ea3396" id=353d5e8e-3ab6-4bad-9379-e25ad0aba272 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.326018306Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:23:31 functional-393395 crio[4232]: time="2025-09-19 22:23:31.326051908Z" level=info msg="Removed pod sandbox: afb7f2dd6427094e1c71ca64d961b1ae20c0b0d6d16d0bb11ab0244135ea3396" id=353d5e8e-3ab6-4bad-9379-e25ad0aba272 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 19 22:23:38 functional-393395 crio[4232]: time="2025-09-19 22:23:38.295947731Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=6946d2a0-a744-47d3-9b6d-f446ab31bf6b name=/runtime.v1.ImageService/PullImage
	Sep 19 22:23:52 functional-393395 crio[4232]: time="2025-09-19 22:23:52.296424347Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=102f82e5-ad7a-4985-b088-e57bd190b190 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:24:28 functional-393395 crio[4232]: time="2025-09-19 22:24:28.296286577Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=ebb97ec1-de55-4ae2-8b49-287ab047ca61 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:24:42 functional-393395 crio[4232]: time="2025-09-19 22:24:42.296193506Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=c7c53202-a11d-466c-992b-3798b1df5b16 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:25:57 functional-393395 crio[4232]: time="2025-09-19 22:25:57.296153145Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8c079879-307c-43ce-bc4a-b3afafebeba2 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:26:12 functional-393395 crio[4232]: time="2025-09-19 22:26:12.295704188Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3d096a4c-9c2c-44ab-846f-737931c391d9 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:28:49 functional-393395 crio[4232]: time="2025-09-19 22:28:49.296585726Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=622ae1cb-a714-4f83-a952-020c41cda645 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:28:57 functional-393395 crio[4232]: time="2025-09-19 22:28:57.296165453Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=201d5d32-e038-47bc-a457-72945ff5c817 name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	dfae97536d976       docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285                  9 minutes ago       Running             myfrontend                  0                   3f367b6ab7a90       sp-pod
	d1eacfb27f1f2       docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb                  9 minutes ago       Running             mysql                       0                   771da0114be46       mysql-5bb876957f-9dhtx
	dbbeb2b25416a       docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8                  10 minutes ago      Running             nginx                       0                   d1967e3989c0c       nginx-svc
	134c22d7171b0       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   10 minutes ago      Running             dashboard-metrics-scraper   0                   519587e9e75b7       dashboard-metrics-scraper-77bf4d6c4c-wdsps
	513bb8a7ba791       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         10 minutes ago      Running             kubernetes-dashboard        0                   5972947a6a278       kubernetes-dashboard-855c9754f9-g2cpj
	608bbf4467665       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              10 minutes ago      Exited              mount-munger                0                   fce566c17f29f       busybox-mount
	defa3ebf1cb42       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         2                   0e4fd2e2ee4ba       storage-provisioner
	227c8e07530f4       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                                 10 minutes ago      Running             kube-apiserver              0                   667b17a39fc15       kube-apiserver-functional-393395
	1f5dbf8fcaad9       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Running             kube-controller-manager     2                   70b5a9e883c6d       kube-controller-manager-functional-393395
	eb7ef0748ec28       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   e031663b009a9       etcd-functional-393395
	ab7758d673f12       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                                 10 minutes ago      Exited              kube-controller-manager     1                   70b5a9e883c6d       kube-controller-manager-functional-393395
	d40f8efdb892c       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 10 minutes ago      Running             kube-scheduler              1                   59289c904202c       kube-scheduler-functional-393395
	fa9bde04dc4ad       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   7d0613e29302b       kindnet-2bp8v
	3451a960bc765       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 10 minutes ago      Running             kube-proxy                  1                   2a8ebd1a8a501       kube-proxy-4jqz7
	ace6892425f10       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Exited              storage-provisioner         1                   0e4fd2e2ee4ba       storage-provisioner
	f80f08e23c33d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   9b4baee72f410       coredns-66bc5c9577-b4xmf
	3b2f59e9fdc00       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   9b4baee72f410       coredns-66bc5c9577-b4xmf
	f7d07a9171a27       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 12 minutes ago      Exited              kindnet-cni                 0                   7d0613e29302b       kindnet-2bp8v
	0910015b5fbd0       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                                 12 minutes ago      Exited              kube-proxy                  0                   2a8ebd1a8a501       kube-proxy-4jqz7
	42aeb3629d383       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                                 12 minutes ago      Exited              kube-scheduler              0                   59289c904202c       kube-scheduler-functional-393395
	1acd908b7f7cf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 12 minutes ago      Exited              etcd                        0                   e031663b009a9       etcd-functional-393395
	
	
	==> coredns [3b2f59e9fdc00d374370fa6d680d2d42dabb2d1c265ce4b6956c5b340cc71e45] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:44669 - 29467 "HINFO IN 4476685561075888730.3786100218418605311. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.024978071s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [f80f08e23c33dcdf58d8fa04b80cda128417bcd02252b085ef6ed927b12c21be] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47227 - 37369 "HINFO IN 457564482741273302.483101815254904987. udp 55 false 512" NXDOMAIN qr,rd,ra 130 0.024947598s
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: connect: connection refused
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               functional-393395
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-393395
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=functional-393395
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_21_07_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:21:03 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-393395
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:33:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:29:43 +0000   Fri, 19 Sep 2025 22:21:02 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:29:43 +0000   Fri, 19 Sep 2025 22:21:02 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:29:43 +0000   Fri, 19 Sep 2025 22:21:02 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:29:43 +0000   Fri, 19 Sep 2025 22:21:52 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-393395
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 5300435b4b08447793906209fd1145fb
	  System UUID:                00e4f6c7-160a-4b01-8209-e2bb56c92d7a
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-kwkd9                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-cc7xp           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-9dhtx                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m55s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m50s
	  kube-system                 coredns-66bc5c9577-b4xmf                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     12m
	  kube-system                 etcd-functional-393395                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-2bp8v                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      12m
	  kube-system                 kube-apiserver-functional-393395              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-393395     200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-4jqz7                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-functional-393395              100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wdsps    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-g2cpj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)  kubelet          Node functional-393395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)  kubelet          Node functional-393395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)  kubelet          Node functional-393395 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     12m                kubelet          Node functional-393395 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  12m                kubelet          Node functional-393395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    12m                kubelet          Node functional-393395 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 12m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           12m                node-controller  Node functional-393395 event: Registered Node functional-393395 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-393395 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-393395 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-393395 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-393395 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-393395 event: Registered Node functional-393395 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [1acd908b7f7cf1f7b24e34bc57505b5406a2d31e16a86b5ce5edde762cdaf4e2] <==
	{"level":"warn","ts":"2025-09-19T22:21:03.353534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.359987Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.366592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.374123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42758","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.386405Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.393893Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42802","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:21:03.401613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42812","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:22:29.612923Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-09-19T22:22:29.613015Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-393395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-09-19T22:22:29.613139Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-19T22:22:29.614712Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-09-19T22:22:29.616121Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:22:29.616177Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"warn","ts":"2025-09-19T22:22:29.616202Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2025-09-19T22:22:29.616251Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"info","ts":"2025-09-19T22:22:29.616252Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-09-19T22:22:29.616273Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:22:29.616284Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-09-19T22:22:29.616204Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-09-19T22:22:29.616297Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-09-19T22:22:29.616304Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:22:29.618691Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-09-19T22:22:29.618758Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-09-19T22:22:29.618793Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-09-19T22:22:29.618804Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-393395","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [eb7ef0748ec286eac5617287b6e931129e297aaf7f7f2304ece6c7591341542d] <==
	{"level":"warn","ts":"2025-09-19T22:22:33.123237Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.129447Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.136379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40578","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.149418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.155691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.162071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.171333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.177366Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.183299Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.189428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.196048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.202624Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40726","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.215608Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.221887Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.228207Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40786","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.234514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40818","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.243300Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40828","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.259803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.266314Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.272630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:22:33.321317Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40886","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:23:25.201638Z","caller":"traceutil/trace.go:172","msg":"trace[611199664] transaction","detail":"{read_only:false; response_revision:835; number_of_response:1; }","duration":"123.339342ms","start":"2025-09-19T22:23:25.078283Z","end":"2025-09-19T22:23:25.201622Z","steps":["trace[611199664] 'process raft request'  (duration: 120.256329ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:32:32.821984Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1147}
	{"level":"info","ts":"2025-09-19T22:32:32.842846Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1147,"took":"20.455042ms","hash":1742729636,"current-db-size-bytes":3342336,"current-db-size":"3.3 MB","current-db-size-in-use-bytes":1544192,"current-db-size-in-use":"1.5 MB"}
	{"level":"info","ts":"2025-09-19T22:32:32.842907Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1742729636,"revision":1147,"compact-revision":-1}
	
	
	==> kernel <==
	 22:33:15 up  1:15,  0 users,  load average: 0.12, 0.20, 0.31
	Linux functional-393395 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [f7d07a9171a273410895313b1c2073b03be31506777dce310965addefebb25ea] <==
	I0919 22:21:12.243941       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 22:21:12.244786       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I0919 22:21:12.245086       1 main.go:148] setting mtu 1500 for CNI 
	I0919 22:21:12.245133       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 22:21:12.245167       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T22:21:12Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 22:21:12.357982       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 22:21:12.358006       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 22:21:12.358014       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 22:21:12.443900       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E0919 22:21:42.358570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E0919 22:21:42.358570       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E0919 22:21:42.444243       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E0919 22:21:42.444322       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I0919 22:21:43.859174       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 22:21:43.859200       1 metrics.go:72] Registering metrics
	I0919 22:21:43.859267       1 controller.go:711] "Syncing nftables rules"
	I0919 22:21:52.365063       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:21:52.365160       1 main.go:301] handling current node
	I0919 22:22:02.365442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:22:02.365481       1 main.go:301] handling current node
	I0919 22:22:12.360321       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:22:12.360360       1 main.go:301] handling current node
	
	
	==> kindnet [fa9bde04dc4ad0896ba82c23298468b27d82a3187051b08e2b9b040110fe539a] <==
	I0919 22:31:10.097680       1 main.go:301] handling current node
	I0919 22:31:20.100298       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:20.100332       1 main.go:301] handling current node
	I0919 22:31:30.097619       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:30.097664       1 main.go:301] handling current node
	I0919 22:31:40.097308       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:40.097351       1 main.go:301] handling current node
	I0919 22:31:50.101061       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:31:50.101096       1 main.go:301] handling current node
	I0919 22:32:00.106491       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:00.106525       1 main.go:301] handling current node
	I0919 22:32:10.097636       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:10.097690       1 main.go:301] handling current node
	I0919 22:32:20.105997       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:20.106034       1 main.go:301] handling current node
	I0919 22:32:30.098373       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:30.098432       1 main.go:301] handling current node
	I0919 22:32:40.097362       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:40.097409       1 main.go:301] handling current node
	I0919 22:32:50.097277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:32:50.097308       1 main.go:301] handling current node
	I0919 22:33:00.100175       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:00.100211       1 main.go:301] handling current node
	I0919 22:33:10.097524       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:33:10.097578       1 main.go:301] handling current node
	
	
	==> kube-apiserver [227c8e07530f4aadfb1fc4928ef771d27ec5cffef8eaae1ccf5e54e371d93195] <==
	I0919 22:23:01.710208       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.108.56.226"}
	I0919 22:23:10.622803       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.1.252"}
	I0919 22:23:13.507493       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.109.191.201"}
	I0919 22:23:20.875150       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.99.157.120"}
	E0919 22:23:24.390363       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:54620: use of closed network connection
	E0919 22:23:34.757579       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51564: use of closed network connection
	E0919 22:23:35.033841       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51574: use of closed network connection
	E0919 22:23:36.203500       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51592: use of closed network connection
	I0919 22:23:41.181443       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:23:54.530509       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:25:03.030475       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:25:23.820252       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:21.944817       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:26:42.231856       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:27:48.940071       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:03.238849       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:28:56.460125       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:29:19.154426       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:13.199715       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:30:27.687004       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:23.994038       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:31:52.437059       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:33.730628       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:32:35.943158       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:32:53.125269       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [1f5dbf8fcaad9efd2e473b8e39d1130d06a59b861c7011c5e338079b39d82976] <==
	I0919 22:22:37.100567       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-393395"
	I0919 22:22:37.100627       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:22:37.102584       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:22:37.103875       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:22:37.106167       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:22:37.108560       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:22:37.108817       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 22:22:37.109180       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 22:22:37.109180       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:22:37.110312       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 22:22:37.110410       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 22:22:37.110432       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I0919 22:22:37.110431       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:22:37.110451       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 22:22:37.110418       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 22:22:37.114913       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:22:37.126142       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:22:37.128412       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:22:37.132771       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:23:01.602490       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:23:01.607114       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:23:01.611296       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:23:01.615286       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:23:01.615319       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E0919 22:23:01.622144       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-controller-manager [ab7758d673f1284eeda2c7f1a1a53aedf6253beade02b3ab4a80d867bba191f2] <==
	I0919 22:22:20.512712       1 serving.go:386] Generated self-signed cert in-memory
	I0919 22:22:20.705060       1 controllermanager.go:191] "Starting" version="v1.34.0"
	I0919 22:22:20.705094       1 controllermanager.go:193] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:22:20.707594       1 dynamic_cafile_content.go:161] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0919 22:22:20.707597       1 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0919 22:22:20.707910       1 secure_serving.go:211] Serving securely on 127.0.0.1:10257
	I0919 22:22:20.707945       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E0919 22:22:30.710430       1 controllermanager.go:245] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.49.2:8441/healthz\": dial tcp 192.168.49.2:8441: connect: connection refused"
	
	
	==> kube-proxy [0910015b5fbd01cf4397fcb3fa1e2fccacef59df277778aa8b4f892b2f3c75cb] <==
	I0919 22:21:12.085918       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:21:12.147856       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:21:12.249294       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:21:12.250206       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:21:12.250365       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:21:12.277220       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:21:12.277294       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:21:12.283359       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:21:12.283838       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:21:12.283923       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:21:12.285905       1 config.go:200] "Starting service config controller"
	I0919 22:21:12.285923       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:21:12.285946       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:21:12.285952       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:21:12.285965       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:21:12.285970       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:21:12.286015       1 config.go:309] "Starting node config controller"
	I0919 22:21:12.286026       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:21:12.286032       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:21:12.386208       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:21:12.386226       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:21:12.386256       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [3451a960bc765f39ecd7d048d024301c5d3eb53d484edfccb9187782e559f28d] <==
	I0919 22:22:19.808446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	E0919 22:22:19.810609       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-393395&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:22:21.041189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-393395&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:22:23.544765       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-393395&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:22:28.829360       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://control-plane.minikube.internal:8441/api/v1/nodes?fieldSelector=metadata.name%3Dfunctional-393395&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I0919 22:22:36.809462       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:22:36.809513       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:22:36.809604       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:22:36.828849       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:22:36.828911       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:22:36.834491       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:22:36.834873       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:22:36.834917       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:22:36.836092       1 config.go:200] "Starting service config controller"
	I0919 22:22:36.836122       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:22:36.836155       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:22:36.836161       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:22:36.836186       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:22:36.836212       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:22:36.836321       1 config.go:309] "Starting node config controller"
	I0919 22:22:36.836349       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:22:36.836357       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:22:36.937191       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:22:36.937317       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:22:36.937360       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [42aeb3629d38306259f19ed4d0d4a4c5814d2678ece932a2e5a31316a1942278] <==
	E0919 22:21:04.120276       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:21:04.120320       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:21:04.120377       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:21:04.120413       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:21:04.119673       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:21:04.120444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:21:04.120479       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:21:04.120555       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:21:04.120828       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:21:04.121094       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:21:04.121166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:21:04.121271       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:21:04.121353       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:21:04.121365       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:21:04.956241       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:21:05.002647       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:21:05.017836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:21:05.157813       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I0919 22:21:05.617384       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:22:19.330504       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I0919 22:22:19.330585       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:22:19.330741       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I0919 22:22:19.330771       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I0919 22:22:19.330778       1 server.go:265] "[graceful-termination] secure server is exiting"
	E0919 22:22:19.330798       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [d40f8efdb892c03db59853e4d368ab3daf474d9f4a9fbe5dbf56d82ac56d7777] <==
	E0919 22:22:25.371318       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:22:25.388018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:22:25.477962       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:22:25.487898       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:22:25.896876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:22:28.072582       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E0919 22:22:28.750247       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: Get \"https://192.168.49.2:8441/apis/apps/v1/statefulsets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E0919 22:22:28.881319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://192.168.49.2:8441/api/v1/services?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:22:28.919189       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/volumeattachments?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E0919 22:22:28.979794       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:22:29.091340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://192.168.49.2:8441/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:22:29.091373       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: Get \"https://192.168.49.2:8441/api/v1/persistentvolumes?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E0919 22:22:29.163636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: Get \"https://192.168.49.2:8441/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E0919 22:22:29.585018       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E0919 22:22:29.592619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceslices?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E0919 22:22:29.598167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: Get \"https://192.168.49.2:8441/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:22:29.669786       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: Get \"https://192.168.49.2:8441/api/v1/replicationcontrollers?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:22:30.141895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/deviceclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E0919 22:22:30.485497       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: Get \"https://192.168.49.2:8441/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-authentication&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:22:30.530899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: Get \"https://192.168.49.2:8441/apis/apps/v1/replicasets?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:22:30.588579       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://192.168.49.2:8441/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:22:30.677636       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: Get \"https://192.168.49.2:8441/api/v1/pods?fieldSelector=status.phase%21%3DSucceeded%2Cstatus.phase%21%3DFailed&limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E0919 22:22:31.188154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: Get \"https://192.168.49.2:8441/apis/resource.k8s.io/v1/resourceclaims?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E0919 22:22:31.283616       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: Get \"https://192.168.49.2:8441/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0\": dial tcp 192.168.49.2:8441: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	I0919 22:22:37.892679       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:32:05 functional-393395 kubelet[5291]: E0919 22:32:05.295520    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	Sep 19 22:32:11 functional-393395 kubelet[5291]: E0919 22:32:11.427142    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321131426869222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:11 functional-393395 kubelet[5291]: E0919 22:32:11.427175    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321131426869222  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:14 functional-393395 kubelet[5291]: E0919 22:32:14.296250    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-cc7xp" podUID="f103572c-81a5-4040-b7e8-02f1205d561a"
	Sep 19 22:32:16 functional-393395 kubelet[5291]: E0919 22:32:16.296212    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	Sep 19 22:32:21 functional-393395 kubelet[5291]: E0919 22:32:21.429317    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321141428934071  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:21 functional-393395 kubelet[5291]: E0919 22:32:21.429358    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321141428934071  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:29 functional-393395 kubelet[5291]: E0919 22:32:29.296328    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	Sep 19 22:32:29 functional-393395 kubelet[5291]: E0919 22:32:29.296359    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-cc7xp" podUID="f103572c-81a5-4040-b7e8-02f1205d561a"
	Sep 19 22:32:31 functional-393395 kubelet[5291]: E0919 22:32:31.431467    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321151431201291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:31 functional-393395 kubelet[5291]: E0919 22:32:31.431496    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321151431201291  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:40 functional-393395 kubelet[5291]: E0919 22:32:40.296043    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-cc7xp" podUID="f103572c-81a5-4040-b7e8-02f1205d561a"
	Sep 19 22:32:41 functional-393395 kubelet[5291]: E0919 22:32:41.433611    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321161433290329  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:41 functional-393395 kubelet[5291]: E0919 22:32:41.433656    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321161433290329  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:42 functional-393395 kubelet[5291]: E0919 22:32:42.296167    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	Sep 19 22:32:51 functional-393395 kubelet[5291]: E0919 22:32:51.435905    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321171435620611  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:51 functional-393395 kubelet[5291]: E0919 22:32:51.435951    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321171435620611  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:32:55 functional-393395 kubelet[5291]: E0919 22:32:55.296231    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-cc7xp" podUID="f103572c-81a5-4040-b7e8-02f1205d561a"
	Sep 19 22:32:57 functional-393395 kubelet[5291]: E0919 22:32:57.295548    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	Sep 19 22:33:01 functional-393395 kubelet[5291]: E0919 22:33:01.438213    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321181437852174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:33:01 functional-393395 kubelet[5291]: E0919 22:33:01.438257    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321181437852174  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:33:06 functional-393395 kubelet[5291]: E0919 22:33:06.296834    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-connect-7d85dfc575-cc7xp" podUID="f103572c-81a5-4040-b7e8-02f1205d561a"
	Sep 19 22:33:11 functional-393395 kubelet[5291]: E0919 22:33:11.439981    5291 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321191439670653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:33:11 functional-393395 kubelet[5291]: E0919 22:33:11.440016    5291 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321191439670653  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:303433}  inodes_used:{value:134}}"
	Sep 19 22:33:12 functional-393395 kubelet[5291]: E0919 22:33:12.296089    5291 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short-name \\\"kicbase/echo-server:latest\\\" did not resolve to an alias and no unqualified-search registries are defined in \\\"/etc/containers/registries.conf\\\"\"" pod="default/hello-node-75c85bcc94-kwkd9" podUID="6902568e-a174-4e3f-9c8c-200957cc0008"
	
	
	==> kubernetes-dashboard [513bb8a7ba791de7f1d09c6d80fa744667429fa34ddeda9ac90d617744bb5664] <==
	2025/09/19 22:23:07 Starting overwatch
	2025/09/19 22:23:07 Using namespace: kubernetes-dashboard
	2025/09/19 22:23:07 Using in-cluster config to connect to apiserver
	2025/09/19 22:23:07 Using secret token for csrf signing
	2025/09/19 22:23:07 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 22:23:07 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 22:23:07 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 22:23:07 Generating JWE encryption key
	2025/09/19 22:23:07 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 22:23:07 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 22:23:07 Initializing JWE encryption key from synchronized object
	2025/09/19 22:23:07 Creating in-cluster Sidecar client
	2025/09/19 22:23:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 22:23:07 Serving insecurely on HTTP port: 9090
	2025/09/19 22:23:37 Successful request to sidecar
	
	
	==> storage-provisioner [ace6892425f10847fdc79632691948e950a939c22311f75d381cfa77e0dfd1e2] <==
	I0919 22:22:19.677624       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 22:22:19.684681       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	
	
	==> storage-provisioner [defa3ebf1cb4246b415a548ab97cb7541e8c40ff1886950a8bc55c95de0fdf17] <==
	W0919 22:32:50.531167       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:52.534130       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:52.538933       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:54.542014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:54.545860       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:56.548869       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:56.554193       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:58.558463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:32:58.563201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:00.566296       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:00.570513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:02.573640       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:02.578711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:04.581688       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:04.587573       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:06.591299       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:06.595369       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:08.598498       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:08.602351       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:10.605792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:10.609856       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:12.613908       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:12.621442       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:14.624533       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 22:33:14.629134       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-393395 -n functional-393395
helpers_test.go:269: (dbg) Run:  kubectl --context functional-393395 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-kwkd9 hello-node-connect-7d85dfc575-cc7xp
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-393395 describe pod busybox-mount hello-node-75c85bcc94-kwkd9 hello-node-connect-7d85dfc575-cc7xp
helpers_test.go:290: (dbg) kubectl --context functional-393395 describe pod busybox-mount hello-node-75c85bcc94-kwkd9 hello-node-connect-7d85dfc575-cc7xp:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-393395/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:23:01 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  mount-munger:
	    Container ID:  cri-o://608bbf44676659ac958d7087783ae40baef21087d851c29f3234f286782ca6f5
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Fri, 19 Sep 2025 22:23:04 +0000
	      Finished:     Fri, 19 Sep 2025 22:23:04 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-t2q7m (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-t2q7m:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age   From               Message
	  ----    ------     ----  ----               -------
	  Normal  Scheduled  10m   default-scheduler  Successfully assigned default/busybox-mount to functional-393395
	  Normal  Pulling    10m   kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     10m   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 2.526s (2.526s including waiting). Image size: 4631262 bytes.
	  Normal  Created    10m   kubelet            Created container: mount-munger
	  Normal  Started    10m   kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-kwkd9
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-393395/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:22:58 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.4
	IPs:
	  IP:           10.244.0.4
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lr8mk (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-lr8mk:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kwkd9 to functional-393395
	  Normal   Pulling    7m19s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m19s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m19s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    4s (x44 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4s (x44 over 10m)    kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-cc7xp
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-393395/192.168.49.2
	Start Time:       Fri, 19 Sep 2025 22:23:13 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.9
	IPs:
	  IP:           10.244.0.9
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2lldd (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2lldd:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-cc7xp to functional-393395
	  Normal   Pulling    7m4s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
	  Warning  Failed     7m4s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     4m59s (x20 over 10m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m46s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (603.18s)
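The ErrImagePull/ImagePullBackOff lines above all trace back to CRI-O short-name resolution: the pod references the unqualified image "kicbase/echo-server", and the node's /etc/containers/registries.conf (named in the error) defines no unqualified-search registries, so the name never resolves. A minimal node-side sketch of a workaround, run inside the node (e.g. via `minikube -p functional-393395 ssh`); the drop-in path is illustrative, and the docker.io location of the image is an assumption — the same two settings can instead be added directly to /etc/containers/registries.conf:

	# hypothetical drop-in; lets short names fall through to Docker Hub and pins this one explicitly
	sudo tee /etc/containers/registries.conf.d/99-shortnames.conf >/dev/null <<'EOF'
	unqualified-search-registries = ["docker.io"]

	[aliases]
	"kicbase/echo-server" = "docker.io/kicbase/echo-server"
	EOF
	# restart the runtime so CRI-O re-reads the registries configuration
	sudo systemctl restart crio

Either setting alone is enough: the alias pins this short name to one registry, while unqualified-search-registries lets any short name fall through to docker.io.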

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-393395 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-393395 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-kwkd9" [6902568e-a174-4e3f-9c8c-200957cc0008] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-393395 -n functional-393395
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-09-19 22:32:58.925124536 +0000 UTC m=+1137.633785505
functional_test.go:1460: (dbg) Run:  kubectl --context functional-393395 describe po hello-node-75c85bcc94-kwkd9 -n default
functional_test.go:1460: (dbg) kubectl --context functional-393395 describe po hello-node-75c85bcc94-kwkd9 -n default:
Name:             hello-node-75c85bcc94-kwkd9
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-393395/192.168.49.2
Start Time:       Fri, 19 Sep 2025 22:22:58 +0000
Labels:           app=hello-node
pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.4
IPs:
IP:           10.244.0.4
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-lr8mk (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-lr8mk:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-kwkd9 to functional-393395
Normal   Pulling    7m1s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m1s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short-name "kicbase/echo-server:latest" did not resolve to an alias and no unqualified-search registries are defined in "/etc/containers/registries.conf"
Warning  Failed     7m1s (x5 over 10m)      kubelet            Error: ErrImagePull
Normal   BackOff    4m47s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
Warning  Failed     4m47s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-393395 logs hello-node-75c85bcc94-kwkd9 -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-393395 logs hello-node-75c85bcc94-kwkd9 -n default: exit status 1 (73.001844ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-kwkd9" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-393395 logs hello-node-75c85bcc94-kwkd9 -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.65s)
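The root cause is the same unqualified image reference as in ServiceCmdConnect. A cluster-side alternative that avoids touching registries.conf is to redeploy with a fully qualified image name; a minimal sketch using this profile's names (the docker.io path for kicbase/echo-server is an assumption about where the image is published):

	kubectl --context functional-393395 delete deployment,service hello-node --ignore-not-found
	kubectl --context functional-393395 create deployment hello-node --image=docker.io/kicbase/echo-server:latest
	kubectl --context functional-393395 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-393395 rollout status deployment/hello-node --timeout=2m

With the registry spelled out, CRI-O skips short-name resolution entirely, so the /etc/containers/registries.conf contents no longer matter for this pull.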

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 service --namespace=default --https --url hello-node: exit status 115 (534.121593ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:31398
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-393395 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)
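Exit status 115 (SVC_UNREACHABLE) is a downstream effect of the DeployApp failure above: the NodePort exists, but no running pod backs the hello-node service. A quick sketch for confirming that before retrying the service subcommands, using this profile's names:

	kubectl --context functional-393395 get pods -l app=hello-node
	kubectl --context functional-393395 get endpointslices -l kubernetes.io/service-name=hello-node
	out/minikube-linux-amd64 -p functional-393395 service hello-node --url

Once the EndpointSlice lists a ready address, `minikube service` should print the URL without the SVC_UNREACHABLE error.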

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 service hello-node --url --format={{.IP}}: exit status 115 (535.508655ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-393395 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.54s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 service hello-node --url: exit status 115 (523.75119ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:31398
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-393395 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:31398
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (30.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 node add --alsologtostderr -v 5: exit status 80 (28.739741266s)

                                                
                                                
-- stdout --
	* Adding node m04 to cluster ha-984158 as [worker]
	* Starting "ha-984158-m04" worker node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Stopping node "ha-984158-m04"  ...
	* Deleting "ha-984158-m04" in docker ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:35:19.254088   78035 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:35:19.254387   78035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:19.254399   78035 out.go:374] Setting ErrFile to fd 2...
	I0919 22:35:19.254403   78035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:19.254578   78035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:35:19.254848   78035 mustload.go:65] Loading cluster: ha-984158
	I0919 22:35:19.255262   78035 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:19.255669   78035 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:35:19.274525   78035 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:35:19.274788   78035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:19.331576   78035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:35:19.321625352 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:19.331905   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:35:19.349784   78035 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:35:19.350276   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:35:19.371599   78035 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:35:19.371894   78035 api_server.go:166] Checking apiserver status ...
	I0919 22:35:19.371949   78035 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:19.372002   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:35:19.391025   78035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:35:19.492435   78035 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:35:19.503503   78035 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:19.503576   78035 ssh_runner.go:195] Run: ls
	I0919 22:35:19.507848   78035 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:19.512287   78035 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:19.514279   78035 out.go:179] * Adding node m04 to cluster ha-984158 as [worker]
	I0919 22:35:19.515667   78035 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:19.515783   78035 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:35:19.517290   78035 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:35:19.518411   78035 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:35:19.519620   78035 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:35:19.520887   78035 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:35:19.520940   78035 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:35:19.520947   78035 cache.go:58] Caching tarball of preloaded images
	I0919 22:35:19.521007   78035 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:35:19.521058   78035 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:35:19.521070   78035 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:35:19.521213   78035 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:35:19.544146   78035 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:35:19.544166   78035 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:35:19.544182   78035 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:35:19.544205   78035 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:19.544303   78035 start.go:364] duration metric: took 81.037µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:35:19.544329   78035 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 22:35:19.544434   78035 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:35:19.546417   78035 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:35:19.546521   78035 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:35:19.546549   78035 client.go:168] LocalClient.Create starting
	I0919 22:35:19.546612   78035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:35:19.546645   78035 main.go:141] libmachine: Decoding PEM data...
	I0919 22:35:19.546663   78035 main.go:141] libmachine: Parsing certificate...
	I0919 22:35:19.546732   78035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:35:19.546766   78035 main.go:141] libmachine: Decoding PEM data...
	I0919 22:35:19.546783   78035 main.go:141] libmachine: Parsing certificate...
	I0919 22:35:19.547079   78035 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:19.564416   78035 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc0014f2420 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:35:19.564453   78035 kic.go:121] calculated static IP "192.168.49.5" for the "ha-984158-m04" container
	I0919 22:35:19.564518   78035 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:35:19.582658   78035 cli_runner.go:164] Run: docker volume create ha-984158-m04 --label name.minikube.sigs.k8s.io=ha-984158-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:35:19.602189   78035 oci.go:103] Successfully created a docker volume ha-984158-m04
	I0919 22:35:19.602265   78035 cli_runner.go:164] Run: docker run --rm --name ha-984158-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m04 --entrypoint /usr/bin/test -v ha-984158-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:35:19.982411   78035 oci.go:107] Successfully prepared a docker volume ha-984158-m04
	I0919 22:35:19.982470   78035 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:35:19.982493   78035 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:35:19.982567   78035 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:35:24.295470   78035 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.31283365s)
	I0919 22:35:24.295520   78035 kic.go:203] duration metric: took 4.313022523s to extract preloaded images to volume ...
	W0919 22:35:24.295651   78035 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:35:24.295696   78035 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:35:24.295748   78035 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:35:24.355888   78035 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m04 --name ha-984158-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m04 --network ha-984158 --ip 192.168.49.5 --volume ha-984158-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:35:24.673323   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Running}}
	I0919 22:35:24.693887   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:24.715871   78035 cli_runner.go:164] Run: docker exec ha-984158-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:35:24.762838   78035 oci.go:144] the created container "ha-984158-m04" has a running status.
	I0919 22:35:24.762869   78035 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa...
	I0919 22:35:24.970506   78035 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:35:24.970576   78035 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:35:25.224582   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:25.244672   78035 cli_runner.go:164] Run: docker inspect ha-984158-m04
	I0919 22:35:25.263835   78035 errors.go:84] Postmortem inspect ("docker inspect ha-984158-m04"): -- stdout --
	[
	    {
	        "Id": "4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c",
	        "Created": "2025-09-19T22:35:24.375747367Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:35:24.427906845Z",
	            "FinishedAt": "2025-09-19T22:35:24.869806339Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c/hostname",
	        "HostsPath": "/var/lib/docker/containers/4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c/hosts",
	        "LogPath": "/var/lib/docker/containers/4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c/4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c-json.log",
	        "Name": "/ha-984158-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c",
	                "LowerDir": "/var/lib/docker/overlay2/ad65a032bf2446cc9a1dbf67451efd3ffbb147bf4d6ff11dcc5fe1ed779eee41-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/ad65a032bf2446cc9a1dbf67451efd3ffbb147bf4d6ff11dcc5fe1ed779eee41/merged",
	                "UpperDir": "/var/lib/docker/overlay2/ad65a032bf2446cc9a1dbf67451efd3ffbb147bf4d6ff11dcc5fe1ed779eee41/diff",
	                "WorkDir": "/var/lib/docker/overlay2/ad65a032bf2446cc9a1dbf67451efd3ffbb147bf4d6ff11dcc5fe1ed779eee41/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158-m04",
	                "Source": "/var/lib/docker/volumes/ha-984158-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158-m04",
	                "name.minikube.sigs.k8s.io": "ha-984158-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158-m04",
	                        "4b1bc0ac598e"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
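	The inspect output above shows the freshly created node container already in state "exited" with ExitCode 255 roughly 0.4s after it started (StartedAt 22:35:24.427 vs FinishedAt 22:35:24.869), i.e. the init process dies before SSH is ever reachable. A minimal way to confirm this by hand, assuming the container still exists, is a targeted inspect query:

	    docker inspect -f 'status={{.State.Status}} exit={{.State.ExitCode}} started={{.State.StartedAt}} finished={{.State.FinishedAt}}' ha-984158-m04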
	I0919 22:35:25.263914   78035 cli_runner.go:164] Run: docker logs --timestamps --details ha-984158-m04
	I0919 22:35:25.285212   78035 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-984158-m04"): -- stdout --
	2025-09-19T22:35:24.666243307Z  + userns=
	2025-09-19T22:35:24.666272461Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:35:24.668549875Z  + validate_userns
	2025-09-19T22:35:24.668565123Z  + [[ -z '' ]]
	2025-09-19T22:35:24.668567754Z  + return
	2025-09-19T22:35:24.668569608Z  + configure_containerd
	2025-09-19T22:35:24.668571389Z  + local snapshotter=
	2025-09-19T22:35:24.668573147Z  + [[ -n '' ]]
	2025-09-19T22:35:24.668574743Z  + [[ -z '' ]]
	2025-09-19T22:35:24.669060490Z  ++ stat -f -c %T /kind
	2025-09-19T22:35:24.670232449Z  + container_filesystem=overlayfs
	2025-09-19T22:35:24.670251046Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:35:24.670254977Z  + [[ -n '' ]]
	2025-09-19T22:35:24.670257829Z  + configure_proxy
	2025-09-19T22:35:24.670260539Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:35:24.677738018Z  + [[ ! -z '' ]]
	2025-09-19T22:35:24.677762545Z  + cat
	2025-09-19T22:35:24.679117980Z  + fix_mount
	2025-09-19T22:35:24.679135367Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:35:24.679185342Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:35:24.679657243Z  ++ which mount
	2025-09-19T22:35:24.681290334Z  ++ which umount
	2025-09-19T22:35:24.682171627Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:35:24.689436598Z  ++ which mount
	2025-09-19T22:35:24.690905187Z  ++ which umount
	2025-09-19T22:35:24.691926238Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:35:24.693819131Z  +++ which mount
	2025-09-19T22:35:24.695031938Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:35:24.696115049Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:35:24.696162177Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:35:24.696166767Z  INFO: remounting /sys read-only
	2025-09-19T22:35:24.696170238Z  + mount -o remount,ro /sys
	2025-09-19T22:35:24.698015241Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:35:24.698034988Z  INFO: making mounts shared
	2025-09-19T22:35:24.698037950Z  + mount --make-rshared /
	2025-09-19T22:35:24.699515671Z  + retryable_fix_cgroup
	2025-09-19T22:35:24.699836273Z  ++ seq 0 10
	2025-09-19T22:35:24.701001194Z  + for i in $(seq 0 10)
	2025-09-19T22:35:24.701019451Z  + fix_cgroup
	2025-09-19T22:35:24.701022741Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:35:24.701084536Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:35:24.701089228Z  INFO: detected cgroup v2
	2025-09-19T22:35:24.701131665Z  + return
	2025-09-19T22:35:24.701139173Z  + return
	2025-09-19T22:35:24.701141817Z  + fix_machine_id
	2025-09-19T22:35:24.701144156Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:35:24.701146773Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:35:24.701149147Z  + rm -f /etc/machine-id
	2025-09-19T22:35:24.702507323Z  + systemd-machine-id-setup
	2025-09-19T22:35:24.706079946Z  Initializing machine ID from random generator.
	2025-09-19T22:35:24.708789141Z  + fix_product_name
	2025-09-19T22:35:24.709444770Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:35:24.709457882Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:35:24.709461951Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:35:24.709464904Z  + echo kind
	2025-09-19T22:35:24.710445231Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:35:24.712667606Z  + fix_product_uuid
	2025-09-19T22:35:24.712698118Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:35:24.712701877Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:35:24.713926461Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:35:24.713945151Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:35:24.713948960Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:35:24.713952516Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:35:24.715487001Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:35:24.715501601Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:35:24.715504318Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:35:24.715506203Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:35:24.717158599Z  + select_iptables
	2025-09-19T22:35:24.717176483Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:35:24.718261850Z  ++ grep -c '^-'
	2025-09-19T22:35:24.721010815Z  ++ true
	2025-09-19T22:35:24.721292497Z  + num_legacy_lines=0
	2025-09-19T22:35:24.722412152Z  ++ grep -c '^-'
	2025-09-19T22:35:24.727972518Z  + num_nft_lines=6
	2025-09-19T22:35:24.727996852Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:35:24.728001070Z  + mode=nft
	2025-09-19T22:35:24.728003910Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:35:24.728007370Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:35:24.728010215Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:35:24.728122529Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:35:24.728140964Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:35:24.728564683Z  ++ seq 0 15
	2025-09-19T22:35:24.729283697Z  + for i in $(seq 0 15)
	2025-09-19T22:35:24.729294461Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:35:24.732548786Z  + return
	2025-09-19T22:35:24.732567439Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:35:24.732570939Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:35:24.732574313Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:35:24.733059423Z  ++ seq 0 15
	2025-09-19T22:35:24.734009539Z  + for i in $(seq 0 15)
	2025-09-19T22:35:24.734023360Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:35:24.737154720Z  + return
	2025-09-19T22:35:24.737232645Z  + enable_network_magic
	2025-09-19T22:35:24.737284905Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:35:24.737292195Z  + local docker_host_ip
	2025-09-19T22:35:24.738533542Z  ++ cut '-d ' -f1
	2025-09-19T22:35:24.738549457Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:24.738553145Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:35:24.828467720Z  + docker_host_ip=
	2025-09-19T22:35:24.828495936Z  + [[ -z '' ]]
	2025-09-19T22:35:24.829295139Z  ++ ip -4 route show default
	2025-09-19T22:35:24.829442832Z  ++ cut '-d ' -f3
	2025-09-19T22:35:24.831747257Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:35:24.832095592Z  + iptables-save
	2025-09-19T22:35:24.832605759Z  + iptables-restore
	2025-09-19T22:35:24.834844583Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:35:24.844954373Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:35:24.846906348Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:35:24.848148388Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:35:24.848166636Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:35:24.848170473Z  # has been modified.
	2025-09-19T22:35:24.848172600Z  
	2025-09-19T22:35:24.848174526Z  nameserver 192.168.49.1
	2025-09-19T22:35:24.848176386Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:35:24.848178240Z  options edns0 trust-ad ndots:0
	2025-09-19T22:35:24.848189375Z  
	2025-09-19T22:35:24.848191160Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:35:24.848193078Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:35:24.848195683Z  # Overrides: []
	2025-09-19T22:35:24.848198389Z  # Option ndots from: internal'
	2025-09-19T22:35:24.848201037Z  + [[ '' == '' ]]
	2025-09-19T22:35:24.848203713Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:35:24.848206818Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:35:24.848209558Z  # has been modified.
	2025-09-19T22:35:24.848212023Z  
	2025-09-19T22:35:24.848214844Z  nameserver 192.168.49.1
	2025-09-19T22:35:24.848217368Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:35:24.848220004Z  options edns0 trust-ad ndots:0
	2025-09-19T22:35:24.848222698Z  
	2025-09-19T22:35:24.848225364Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:35:24.848228185Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:35:24.848230617Z  # Overrides: []
	2025-09-19T22:35:24.848233298Z  # Option ndots from: internal'
	2025-09-19T22:35:24.848286806Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:35:24.848292184Z  + local files_to_update
	2025-09-19T22:35:24.848294998Z  + local should_fix_certificate=false
	2025-09-19T22:35:24.849475041Z  ++ cut '-d ' -f1
	2025-09-19T22:35:24.849496973Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:24.850001597Z  ++++ hostname
	2025-09-19T22:35:24.850751600Z  +++ timeout 5 getent ahostsv4 ha-984158-m04
	2025-09-19T22:35:24.853415856Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:35:24.853435599Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:35:24.853439656Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:35:24.853442777Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:35:24.853497887Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:35:24.853510270Z  + echo -n 192.168.49.5
	2025-09-19T22:35:24.854718809Z  ++ cut '-d ' -f1
	2025-09-19T22:35:24.854736155Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:24.855229666Z  ++++ hostname
	2025-09-19T22:35:24.856150017Z  +++ timeout 5 getent ahostsv6 ha-984158-m04
	2025-09-19T22:35:24.858745457Z  + curr_ipv6=
	2025-09-19T22:35:24.858761562Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:35:24.858777480Z  INFO: Detected IPv6 address: 
	2025-09-19T22:35:24.858780603Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:35:24.858783280Z  + [[ -n '' ]]
	2025-09-19T22:35:24.858790943Z  + false
	2025-09-19T22:35:24.859337248Z  ++ uname -a
	2025-09-19T22:35:24.860139803Z  + echo 'entrypoint completed: Linux ha-984158-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:35:24.860157368Z  entrypoint completed: Linux ha-984158-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:35:24.860161825Z  + exec /sbin/init
	2025-09-19T22:35:24.866593926Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:35:24.866617975Z  Detected virtualization docker.
	2025-09-19T22:35:24.866622026Z  Detected architecture x86-64.
	2025-09-19T22:35:24.866692636Z  
	2025-09-19T22:35:24.866697539Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:35:24.866701126Z  
	2025-09-19T22:35:24.867164348Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:24.867182316Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:24.867186631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:24.867189976Z  Exiting PID 1...
	
	-- /stdout --
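	The container logs end with systemd (PID 1 inside the kic container) reporting "Failed to create control group inotify object: Too many open files" and exiting, which explains the immediate exit code 255 seen in the inspect output. On a host already running several kic containers this typically points at the kernel's per-user inotify limits rather than the plain file-descriptor ulimit. A rough diagnostic sketch on the host (the raised value below is illustrative, not a minikube default):

	    # current inotify limits
	    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
	    # approximate count of inotify instances currently in use
	    find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
	    # raise the instance limit for this boot (persist via /etc/sysctl.d/*.conf if it helps)
	    sudo sysctl -w fs.inotify.max_user_instances=1024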
	I0919 22:35:25.285321   78035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:25.340317   78035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:35:25.330286098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:25.340398   78035 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:35:25.330286098 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux A
rchitecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:fals
e Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:25.340486   78035 network_create.go:284] running [docker network inspect ha-984158-m04] to gather additional debugging logs...
	I0919 22:35:25.340503   78035 cli_runner.go:164] Run: docker network inspect ha-984158-m04
	W0919 22:35:25.359223   78035 cli_runner.go:211] docker network inspect ha-984158-m04 returned with exit code 1
	I0919 22:35:25.359267   78035 network_create.go:287] error running [docker network inspect ha-984158-m04]: docker network inspect ha-984158-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158-m04 not found
	I0919 22:35:25.359291   78035 network_create.go:289] output of [docker network inspect ha-984158-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158-m04 not found
	
	** /stderr **
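	The "network ha-984158-m04 not found" error above comes from the debug probe querying a network named after the node; the cluster's actual network is "ha-984158", which still exists and keeps the other nodes attached (which is also why the later "docker network rm ha-984158" call fails). Assuming the rest of the cluster is still up, the attached containers can be listed with:

	    docker network ls --filter name=ha-984158
	    docker network inspect ha-984158 --format '{{range $k, $v := .Containers}}{{$v.Name}} {{$v.IPv4Address}}{{"\n"}}{{end}}'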
	I0919 22:35:25.359371   78035 client.go:171] duration metric: took 5.812814143s to LocalClient.Create
	I0919 22:35:27.360293   78035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:35:27.360338   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:27.378637   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:27.378762   78035 retry.go:31] will retry after 226.207347ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:27.605198   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:27.623830   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:27.623928   78035 retry.go:31] will retry after 505.696948ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:28.130379   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:28.151298   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:28.151412   78035 retry.go:31] will retry after 603.438984ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:28.755128   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:28.774771   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:35:28.774899   78035 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:35:28.774919   78035 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:28.774963   78035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:35:28.775004   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:28.794167   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:28.794278   78035 retry.go:31] will retry after 262.089436ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:29.056719   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:29.074704   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:29.074793   78035 retry.go:31] will retry after 500.812365ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:29.576307   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:29.595196   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:29.595321   78035 retry.go:31] will retry after 590.202246ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:30.185862   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:30.204407   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:35:30.204534   78035 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:35:30.204549   78035 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:30.204558   78035 start.go:128] duration metric: took 10.660117546s to createHost
	I0919 22:35:30.204567   78035 start.go:83] releasing machines lock for "ha-984158-m04", held for 10.660253728s
	W0919 22:35:30.204586   78035 start.go:714] error starting host: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:24.867164348Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:24.867182316Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:24.867186631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:24.867189976Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:35:30.204934   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:30.222206   78035 stop.go:39] StopHost: ha-984158-m04
	W0919 22:35:30.222464   78035 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:35:30.224510   78035 out.go:179] * Stopping node "ha-984158-m04"  ...
	I0919 22:35:30.226312   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:30.245906   78035 stop.go:87] host is in state Stopped
	I0919 22:35:30.245975   78035 main.go:141] libmachine: Stopping "ha-984158-m04"...
	I0919 22:35:30.246038   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:30.263734   78035 stop.go:66] stop err: Machine "ha-984158-m04" is already stopped.
	I0919 22:35:30.263770   78035 stop.go:69] host is already stopped
	W0919 22:35:31.264359   78035 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I0919 22:35:31.266729   78035 out.go:179] * Deleting "ha-984158-m04" in docker ...
	I0919 22:35:31.268171   78035 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-984158-m04
	I0919 22:35:31.287221   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:31.305606   78035 cli_runner.go:164] Run: docker exec --privileged -t ha-984158-m04 /bin/bash -c "sudo init 0"
	W0919 22:35:31.323984   78035 cli_runner.go:211] docker exec --privileged -t ha-984158-m04 /bin/bash -c "sudo init 0" returned with exit code 1
	I0919 22:35:31.324022   78035 oci.go:659] error shutdown ha-984158-m04: docker exec --privileged -t ha-984158-m04 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 4b1bc0ac598e8e429a82ba5b0c0b9fcea9c10fdde8b236cf50183f8904cb3c1c is not running
	I0919 22:35:32.324320   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:32.344759   78035 oci.go:667] container ha-984158-m04 status is Stopped
	I0919 22:35:32.344784   78035 oci.go:679] Successfully shutdown container ha-984158-m04
	I0919 22:35:32.344827   78035 cli_runner.go:164] Run: docker rm -f -v ha-984158-m04
	I0919 22:35:32.368527   78035 cli_runner.go:164] Run: docker container inspect -f {{.Id}} ha-984158-m04
	W0919 22:35:32.387035   78035 cli_runner.go:211] docker container inspect -f {{.Id}} ha-984158-m04 returned with exit code 1
	I0919 22:35:32.387130   78035 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:32.407002   78035 cli_runner.go:164] Run: docker network rm ha-984158
	W0919 22:35:32.424367   78035 cli_runner.go:211] docker network rm ha-984158 returned with exit code 1
	W0919 22:35:32.424455   78035 kic.go:390] failed to remove network (which might be okay) ha-984158: unable to delete a network that is attached to a running container
	W0919 22:35:32.424680   78035 out.go:285] ! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:24.867164348Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:24.867182316Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:24.867186631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:24.867189976Z  Exiting PID 1...: container exited unexpectedly
	! StartHost failed, but will try again: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:24.867164348Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:24.867182316Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:24.867186631Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:24.867189976Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:35:32.424701   78035 start.go:729] Will try again in 5 seconds ...
	I0919 22:35:37.427213   78035 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:35:37.427363   78035 start.go:364] duration metric: took 73.237µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:35:37.427405   78035 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:fal
se inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Dis
ableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0919 22:35:37.427505   78035 start.go:125] createHost starting for "m04" (driver="docker")
	I0919 22:35:37.429570   78035 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:35:37.429684   78035 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:35:37.429716   78035 client.go:168] LocalClient.Create starting
	I0919 22:35:37.429772   78035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:35:37.429804   78035 main.go:141] libmachine: Decoding PEM data...
	I0919 22:35:37.429817   78035 main.go:141] libmachine: Parsing certificate...
	I0919 22:35:37.429863   78035 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:35:37.429880   78035 main.go:141] libmachine: Decoding PEM data...
	I0919 22:35:37.429891   78035 main.go:141] libmachine: Parsing certificate...
	I0919 22:35:37.430093   78035 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:35:37.448824   78035 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc001874f60 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:35:37.448865   78035 kic.go:121] calculated static IP "192.168.49.5" for the "ha-984158-m04" container
	I0919 22:35:37.448927   78035 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:35:37.467285   78035 cli_runner.go:164] Run: docker volume create ha-984158-m04 --label name.minikube.sigs.k8s.io=ha-984158-m04 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:35:37.485999   78035 oci.go:103] Successfully created a docker volume ha-984158-m04
	I0919 22:35:37.486167   78035 cli_runner.go:164] Run: docker run --rm --name ha-984158-m04-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m04 --entrypoint /usr/bin/test -v ha-984158-m04:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:35:37.759715   78035 oci.go:107] Successfully prepared a docker volume ha-984158-m04
	I0919 22:35:37.759758   78035 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:35:37.759780   78035 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:35:37.759858   78035 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:35:42.224926   78035 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m04:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.465013953s)
	I0919 22:35:42.225002   78035 kic.go:203] duration metric: took 4.465216655s to extract preloaded images to volume ...
	W0919 22:35:42.225162   78035 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:35:42.225220   78035 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:35:42.225484   78035 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:35:42.282588   78035 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m04 --name ha-984158-m04 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m04 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m04 --network ha-984158 --ip 192.168.49.5 --volume ha-984158-m04:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:35:42.586985   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Running}}
	I0919 22:35:42.605684   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:42.626902   78035 cli_runner.go:164] Run: docker exec ha-984158-m04 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:35:42.679498   78035 oci.go:144] the created container "ha-984158-m04" has a running status.
	I0919 22:35:42.679538   78035 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa...
	I0919 22:35:42.912049   78035 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:35:42.912120   78035 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:35:43.077012   78035 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:43.098676   78035 cli_runner.go:164] Run: docker inspect ha-984158-m04
	I0919 22:35:43.116733   78035 errors.go:84] Postmortem inspect ("docker inspect ha-984158-m04"): -- stdout --
	[
	    {
	        "Id": "7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48",
	        "Created": "2025-09-19T22:35:42.298695323Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "exited",
	            "Running": false,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 0,
	            "ExitCode": 255,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:35:42.346371963Z",
	            "FinishedAt": "2025-09-19T22:35:42.737113649Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48/hostname",
	        "HostsPath": "/var/lib/docker/containers/7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48/hosts",
	        "LogPath": "/var/lib/docker/containers/7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48/7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48-json.log",
	        "Name": "/ha-984158-m04",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158-m04:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "7d2006ca3944358d45dd843a095d7968d9a67b2430411f08969d4cadcbde4e48",
	                "LowerDir": "/var/lib/docker/overlay2/1c9c419647d065fb213b72025997a3e8823d36dfee0421a483e60303bfbd4c28-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1c9c419647d065fb213b72025997a3e8823d36dfee0421a483e60303bfbd4c28/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1c9c419647d065fb213b72025997a3e8823d36dfee0421a483e60303bfbd4c28/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1c9c419647d065fb213b72025997a3e8823d36dfee0421a483e60303bfbd4c28/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158-m04",
	                "Source": "/var/lib/docker/volumes/ha-984158-m04/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158-m04",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158-m04",
	                "name.minikube.sigs.k8s.io": "ha-984158-m04",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "",
	            "SandboxKey": "",
	            "Ports": {},
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.5"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "",
	                    "Gateway": "",
	                    "IPAddress": "",
	                    "IPPrefixLen": 0,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158-m04",
	                        "7d2006ca3944"
	                    ]
	                }
	            }
	        }
	    }
	]
	
	-- /stdout --
	I0919 22:35:43.116810   78035 cli_runner.go:164] Run: docker logs --timestamps --details ha-984158-m04
	I0919 22:35:43.139903   78035 errors.go:91] Postmortem logs ("docker logs --timestamps --details ha-984158-m04"): -- stdout --
	2025-09-19T22:35:42.580133359Z  + userns=
	2025-09-19T22:35:42.580171424Z  + grep -Eqv '0[[:space:]]+0[[:space:]]+4294967295' /proc/self/uid_map
	2025-09-19T22:35:42.582924881Z  + validate_userns
	2025-09-19T22:35:42.583141243Z  + [[ -z '' ]]
	2025-09-19T22:35:42.583153864Z  + return
	2025-09-19T22:35:42.583157199Z  + configure_containerd
	2025-09-19T22:35:42.583159912Z  + local snapshotter=
	2025-09-19T22:35:42.583162309Z  + [[ -n '' ]]
	2025-09-19T22:35:42.583163972Z  + [[ -z '' ]]
	2025-09-19T22:35:42.583409318Z  ++ stat -f -c %T /kind
	2025-09-19T22:35:42.584795618Z  + container_filesystem=overlayfs
	2025-09-19T22:35:42.584814684Z  + [[ overlayfs == \z\f\s ]]
	2025-09-19T22:35:42.584818359Z  + [[ -n '' ]]
	2025-09-19T22:35:42.584821251Z  + configure_proxy
	2025-09-19T22:35:42.584858128Z  + mkdir -p /etc/systemd/system.conf.d/
	2025-09-19T22:35:42.588968595Z  + [[ ! -z '' ]]
	2025-09-19T22:35:42.588987847Z  + cat
	2025-09-19T22:35:42.590213166Z  + fix_mount
	2025-09-19T22:35:42.590232636Z  + echo 'INFO: ensuring we can execute mount/umount even with userns-remap'
	2025-09-19T22:35:42.590286633Z  INFO: ensuring we can execute mount/umount even with userns-remap
	2025-09-19T22:35:42.590793314Z  ++ which mount
	2025-09-19T22:35:42.592431965Z  ++ which umount
	2025-09-19T22:35:42.593697660Z  + chown root:root /usr/bin/mount /usr/bin/umount
	2025-09-19T22:35:42.600839154Z  ++ which mount
	2025-09-19T22:35:42.602285578Z  ++ which umount
	2025-09-19T22:35:42.603344724Z  + chmod -s /usr/bin/mount /usr/bin/umount
	2025-09-19T22:35:42.605069434Z  +++ which mount
	2025-09-19T22:35:42.606093799Z  ++ stat -f -c %T /usr/bin/mount
	2025-09-19T22:35:42.607786849Z  + [[ overlayfs == \a\u\f\s ]]
	2025-09-19T22:35:42.607810061Z  + echo 'INFO: remounting /sys read-only'
	2025-09-19T22:35:42.607814147Z  INFO: remounting /sys read-only
	2025-09-19T22:35:42.607817225Z  + mount -o remount,ro /sys
	2025-09-19T22:35:42.609567492Z  + echo 'INFO: making mounts shared'
	2025-09-19T22:35:42.609588969Z  INFO: making mounts shared
	2025-09-19T22:35:42.609592537Z  + mount --make-rshared /
	2025-09-19T22:35:42.611231113Z  + retryable_fix_cgroup
	2025-09-19T22:35:42.611637532Z  ++ seq 0 10
	2025-09-19T22:35:42.612471275Z  + for i in $(seq 0 10)
	2025-09-19T22:35:42.612487341Z  + fix_cgroup
	2025-09-19T22:35:42.612491195Z  + [[ -f /sys/fs/cgroup/cgroup.controllers ]]
	2025-09-19T22:35:42.612493962Z  + echo 'INFO: detected cgroup v2'
	2025-09-19T22:35:42.612496718Z  INFO: detected cgroup v2
	2025-09-19T22:35:42.612514235Z  + return
	2025-09-19T22:35:42.612545313Z  + return
	2025-09-19T22:35:42.612550852Z  + fix_machine_id
	2025-09-19T22:35:42.612553408Z  + echo 'INFO: clearing and regenerating /etc/machine-id'
	2025-09-19T22:35:42.612578146Z  INFO: clearing and regenerating /etc/machine-id
	2025-09-19T22:35:42.612594527Z  + rm -f /etc/machine-id
	2025-09-19T22:35:42.613673665Z  + systemd-machine-id-setup
	2025-09-19T22:35:42.617037776Z  Initializing machine ID from random generator.
	2025-09-19T22:35:42.619925221Z  + fix_product_name
	2025-09-19T22:35:42.619944951Z  + [[ -f /sys/class/dmi/id/product_name ]]
	2025-09-19T22:35:42.619948300Z  + echo 'INFO: faking /sys/class/dmi/id/product_name to be "kind"'
	2025-09-19T22:35:42.619951429Z  INFO: faking /sys/class/dmi/id/product_name to be "kind"
	2025-09-19T22:35:42.619954527Z  + echo kind
	2025-09-19T22:35:42.621549122Z  + mount -o ro,bind /kind/product_name /sys/class/dmi/id/product_name
	2025-09-19T22:35:42.623458746Z  + fix_product_uuid
	2025-09-19T22:35:42.623475313Z  + [[ ! -f /kind/product_uuid ]]
	2025-09-19T22:35:42.623477743Z  + cat /proc/sys/kernel/random/uuid
	2025-09-19T22:35:42.624602450Z  + [[ -f /sys/class/dmi/id/product_uuid ]]
	2025-09-19T22:35:42.624617985Z  + echo 'INFO: faking /sys/class/dmi/id/product_uuid to be random'
	2025-09-19T22:35:42.624620801Z  INFO: faking /sys/class/dmi/id/product_uuid to be random
	2025-09-19T22:35:42.624622942Z  + mount -o ro,bind /kind/product_uuid /sys/class/dmi/id/product_uuid
	2025-09-19T22:35:42.626388236Z  + [[ -f /sys/devices/virtual/dmi/id/product_uuid ]]
	2025-09-19T22:35:42.626407824Z  + echo 'INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well'
	2025-09-19T22:35:42.626411878Z  INFO: faking /sys/devices/virtual/dmi/id/product_uuid as well
	2025-09-19T22:35:42.626414855Z  + mount -o ro,bind /kind/product_uuid /sys/devices/virtual/dmi/id/product_uuid
	2025-09-19T22:35:42.628176535Z  + select_iptables
	2025-09-19T22:35:42.628308385Z  + local mode num_legacy_lines num_nft_lines
	2025-09-19T22:35:42.629488519Z  ++ grep -c '^-'
	2025-09-19T22:35:42.633872321Z  ++ true
	2025-09-19T22:35:42.634020728Z  + num_legacy_lines=0
	2025-09-19T22:35:42.635140062Z  ++ grep -c '^-'
	2025-09-19T22:35:42.641960773Z  + num_nft_lines=6
	2025-09-19T22:35:42.641986364Z  + '[' 0 -ge 6 ']'
	2025-09-19T22:35:42.641990112Z  + mode=nft
	2025-09-19T22:35:42.641992790Z  + echo 'INFO: setting iptables to detected mode: nft'
	2025-09-19T22:35:42.641996001Z  INFO: setting iptables to detected mode: nft
	2025-09-19T22:35:42.641998836Z  + update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:35:42.642019570Z  + echo 'retryable update-alternatives: --set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:35:42.642022959Z  + local 'args=--set iptables /usr/sbin/iptables-nft'
	2025-09-19T22:35:42.642578691Z  ++ seq 0 15
	2025-09-19T22:35:42.643392478Z  + for i in $(seq 0 15)
	2025-09-19T22:35:42.643413365Z  + /usr/bin/update-alternatives --set iptables /usr/sbin/iptables-nft
	2025-09-19T22:35:42.644526123Z  + return
	2025-09-19T22:35:42.644544045Z  + update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:35:42.644547885Z  + echo 'retryable update-alternatives: --set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:35:42.644550824Z  + local 'args=--set ip6tables /usr/sbin/ip6tables-nft'
	2025-09-19T22:35:42.645034130Z  ++ seq 0 15
	2025-09-19T22:35:42.645945623Z  + for i in $(seq 0 15)
	2025-09-19T22:35:42.645957556Z  + /usr/bin/update-alternatives --set ip6tables /usr/sbin/ip6tables-nft
	2025-09-19T22:35:42.647013280Z  + return
	2025-09-19T22:35:42.647022359Z  + enable_network_magic
	2025-09-19T22:35:42.647024450Z  + local docker_embedded_dns_ip=127.0.0.11
	2025-09-19T22:35:42.647026292Z  + local docker_host_ip
	2025-09-19T22:35:42.648170528Z  ++ cut '-d ' -f1
	2025-09-19T22:35:42.648398262Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:42.648411987Z  +++ timeout 5 getent ahostsv4 host.docker.internal
	2025-09-19T22:35:42.698336724Z  + docker_host_ip=
	2025-09-19T22:35:42.698363046Z  + [[ -z '' ]]
	2025-09-19T22:35:42.699019106Z  ++ ip -4 route show default
	2025-09-19T22:35:42.699169696Z  ++ cut '-d ' -f3
	2025-09-19T22:35:42.701297069Z  + docker_host_ip=192.168.49.1
	2025-09-19T22:35:42.701638058Z  + iptables-save
	2025-09-19T22:35:42.702072700Z  + iptables-restore
	2025-09-19T22:35:42.704203665Z  + sed -e 's/-d 127.0.0.11/-d 192.168.49.1/g' -e 's/-A OUTPUT \(.*\) -j DOCKER_OUTPUT/\0\n-A PREROUTING \1 -j DOCKER_OUTPUT/' -e 's/--to-source :53/--to-source 192.168.49.1:53/g' -e 's/p -j DNAT --to-destination 127.0.0.11/p --dport 53 -j DNAT --to-destination 127.0.0.11/g'
	2025-09-19T22:35:42.710603367Z  + cp /etc/resolv.conf /etc/resolv.conf.original
	2025-09-19T22:35:42.712423343Z  ++ sed -e s/127.0.0.11/192.168.49.1/g /etc/resolv.conf.original
	2025-09-19T22:35:42.713733047Z  + replaced='# Generated by Docker Engine.
	2025-09-19T22:35:42.713749136Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:35:42.713751976Z  # has been modified.
	2025-09-19T22:35:42.713754043Z  
	2025-09-19T22:35:42.713755824Z  nameserver 192.168.49.1
	2025-09-19T22:35:42.713757676Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:35:42.713759612Z  options edns0 trust-ad ndots:0
	2025-09-19T22:35:42.713771798Z  
	2025-09-19T22:35:42.713773517Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:35:42.713775419Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:35:42.713777089Z  # Overrides: []
	2025-09-19T22:35:42.713778882Z  # Option ndots from: internal'
	2025-09-19T22:35:42.713780483Z  + [[ '' == '' ]]
	2025-09-19T22:35:42.713782090Z  + echo '# Generated by Docker Engine.
	2025-09-19T22:35:42.713783964Z  # This file can be edited; Docker Engine will not make further changes once it
	2025-09-19T22:35:42.713785659Z  # has been modified.
	2025-09-19T22:35:42.713787245Z  
	2025-09-19T22:35:42.713788787Z  nameserver 192.168.49.1
	2025-09-19T22:35:42.713790476Z  search local us-east4-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	2025-09-19T22:35:42.713792584Z  options edns0 trust-ad ndots:0
	2025-09-19T22:35:42.713794469Z  
	2025-09-19T22:35:42.713795993Z  # Based on host file: '\''/etc/resolv.conf'\'' (internal resolver)
	2025-09-19T22:35:42.713797802Z  # ExtServers: [host(127.0.0.53)]
	2025-09-19T22:35:42.713799422Z  # Overrides: []
	2025-09-19T22:35:42.713801001Z  # Option ndots from: internal'
	2025-09-19T22:35:42.713909101Z  + files_to_update=('/etc/kubernetes/manifests/etcd.yaml' '/etc/kubernetes/manifests/kube-apiserver.yaml' '/etc/kubernetes/manifests/kube-controller-manager.yaml' '/etc/kubernetes/manifests/kube-scheduler.yaml' '/etc/kubernetes/controller-manager.conf' '/etc/kubernetes/scheduler.conf' '/kind/kubeadm.conf' '/var/lib/kubelet/kubeadm-flags.env')
	2025-09-19T22:35:42.713922860Z  + local files_to_update
	2025-09-19T22:35:42.713925532Z  + local should_fix_certificate=false
	2025-09-19T22:35:42.715146752Z  ++ cut '-d ' -f1
	2025-09-19T22:35:42.715220644Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:42.715711308Z  ++++ hostname
	2025-09-19T22:35:42.716727802Z  +++ timeout 5 getent ahostsv4 ha-984158-m04
	2025-09-19T22:35:42.719543689Z  + curr_ipv4=192.168.49.5
	2025-09-19T22:35:42.719557879Z  + echo 'INFO: Detected IPv4 address: 192.168.49.5'
	2025-09-19T22:35:42.719561128Z  INFO: Detected IPv4 address: 192.168.49.5
	2025-09-19T22:35:42.719564428Z  + '[' -f /kind/old-ipv4 ']'
	2025-09-19T22:35:42.719567275Z  + [[ -n 192.168.49.5 ]]
	2025-09-19T22:35:42.719570085Z  + echo -n 192.168.49.5
	2025-09-19T22:35:42.720733857Z  ++ cut '-d ' -f1
	2025-09-19T22:35:42.720814018Z  ++ head -n1 /dev/fd/63
	2025-09-19T22:35:42.721333305Z  ++++ hostname
	2025-09-19T22:35:42.722197887Z  +++ timeout 5 getent ahostsv6 ha-984158-m04
	2025-09-19T22:35:42.724845789Z  + curr_ipv6=
	2025-09-19T22:35:42.724865686Z  + echo 'INFO: Detected IPv6 address: '
	2025-09-19T22:35:42.724883911Z  INFO: Detected IPv6 address: 
	2025-09-19T22:35:42.724887077Z  + '[' -f /kind/old-ipv6 ']'
	2025-09-19T22:35:42.724892354Z  + [[ -n '' ]]
	2025-09-19T22:35:42.724895201Z  + false
	2025-09-19T22:35:42.725505166Z  ++ uname -a
	2025-09-19T22:35:42.726250121Z  + echo 'entrypoint completed: Linux ha-984158-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux'
	2025-09-19T22:35:42.726262138Z  entrypoint completed: Linux ha-984158-m04 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	2025-09-19T22:35:42.726264519Z  + exec /sbin/init
	2025-09-19T22:35:42.733766599Z  systemd 249.11-0ubuntu3.16 running in system mode (+PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
	2025-09-19T22:35:42.733790283Z  Detected virtualization docker.
	2025-09-19T22:35:42.733792812Z  Detected architecture x86-64.
	2025-09-19T22:35:42.733794734Z  
	2025-09-19T22:35:42.733796432Z  Welcome to Ubuntu 22.04.5 LTS!
	2025-09-19T22:35:42.733798556Z  
	2025-09-19T22:35:42.734236946Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:42.734254405Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:42.734258326Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:42.734261404Z  Exiting PID 1...
	
	-- /stdout --
	I0919 22:35:43.139984   78035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:43.197612   78035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:35:43.187659886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:43.197684   78035 errors.go:98] postmortem docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 22:35:43.187659886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:43.197793   78035 network_create.go:284] running [docker network inspect ha-984158-m04] to gather additional debugging logs...
	I0919 22:35:43.197813   78035 cli_runner.go:164] Run: docker network inspect ha-984158-m04
	W0919 22:35:43.214609   78035 cli_runner.go:211] docker network inspect ha-984158-m04 returned with exit code 1
	I0919 22:35:43.214644   78035 network_create.go:287] error running [docker network inspect ha-984158-m04]: docker network inspect ha-984158-m04: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158-m04 not found
	I0919 22:35:43.214657   78035 network_create.go:289] output of [docker network inspect ha-984158-m04]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158-m04 not found
	
	** /stderr **
	I0919 22:35:43.214714   78035 client.go:171] duration metric: took 5.784990953s to LocalClient.Create
	I0919 22:35:45.215825   78035 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:35:45.215882   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:45.235101   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:45.235230   78035 retry.go:31] will retry after 356.93694ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:45.592864   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:45.612167   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:45.612276   78035 retry.go:31] will retry after 430.82722ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:46.043975   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:46.063521   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:46.063647   78035 retry.go:31] will retry after 632.777392ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:46.697352   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:46.715397   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:35:46.715525   78035 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:35:46.715543   78035 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:46.715593   78035 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:35:46.715626   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:46.734826   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:46.734932   78035 retry.go:31] will retry after 241.748926ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:46.977510   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:46.996751   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:46.996847   78035 retry.go:31] will retry after 339.952272ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:47.337347   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:47.356841   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:35:47.356951   78035 retry.go:31] will retry after 561.38084ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:47.919364   78035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:35:47.939951   78035 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:35:47.940079   78035 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:35:47.940098   78035 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:35:47.940121   78035 start.go:128] duration metric: took 10.512608569s to createHost
	I0919 22:35:47.940130   78035 start.go:83] releasing machines lock for "ha-984158-m04", held for 10.512752231s
	W0919 22:35:47.940227   78035 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:42.734236946Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:42.734254405Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:42.734258326Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:42.734261404Z  Exiting PID 1...: container exited unexpectedly
	* Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:42.734236946Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:42.734254405Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:42.734258326Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:42.734261404Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:35:47.942560   78035 out.go:203] 
	W0919 22:35:47.944005   78035 out.go:285] X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:42.734236946Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:42.734254405Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:42.734258326Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:42.734261404Z  Exiting PID 1...: container exited unexpectedly
	X Exiting due to GUEST_PROVISION_EXIT_UNEXPECTED: Failed to start host: creating host: create: creating: prepare kic ssh: container name "ha-984158-m04" state Stopped: log: 2025-09-19T22:35:42.734236946Z  Failed to create control group inotify object: Too many open files
	2025-09-19T22:35:42.734254405Z  Failed to allocate manager object: Too many open files
	2025-09-19T22:35:42.734258326Z  [!!!!!!] Failed to allocate manager object.
	2025-09-19T22:35:42.734261404Z  Exiting PID 1...: container exited unexpectedly
	I0919 22:35:47.946218   78035 out.go:203] 

                                                
                                                
** /stderr **
ha_test.go:230: failed to add worker node to current ha (multi-control plane) cluster. args "out/minikube-linux-amd64 -p ha-984158 node add --alsologtostderr -v 5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:33:25.030742493Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b35e3615d35b58bcec7825bb039821b1dfb6293e56fe1316d0ae491d5b3b0558",
	            "SandboxKey": "/var/run/docker/netns/b35e3615d35b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:4d:99:af:3d:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "150c15de67a84040f10d82e99ed82c2230b34908474820017c5633e8a5513d79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.227715253s)
helpers_test.go:260: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                           ARGS                                                            │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ service │ functional-393395 service hello-node --url --format={{.IP}}                                                               │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ service │ functional-393395 service hello-node --url                                                                                │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │                     │
	│ delete  │ -p functional-393395                                                                                                      │ functional-393395 │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:33 UTC │
	│ start   │ ha-984158 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio           │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:33 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml                                                          │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- rollout status deployment/busybox                                                                    │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'                                                      │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.io                                              │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.io                                              │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.io                                              │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.default                                         │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.default                                         │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.default                                         │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.default.svc.cluster.local                       │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.default.svc.cluster.local                       │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'                                                     │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- sh -c ping -c 1 192.168.49.1                                        │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- sh -c ping -c 1 192.168.49.1                                        │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- sh -c nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3 │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ kubectl │ ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- sh -c ping -c 1 192.168.49.1                                        │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ node    │ ha-984158 node add --alsologtostderr -v 5                                                                                 │ ha-984158         │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:33:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:33:19.901060   67622 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:19.901185   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901193   67622 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:19.901198   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901448   67622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:33:19.902017   67622 out.go:368] Setting JSON to false
	I0919 22:33:19.903166   67622 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4550,"bootTime":1758316650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:33:19.903283   67622 start.go:140] virtualization: kvm guest
	I0919 22:33:19.906578   67622 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:33:19.908489   67622 notify.go:220] Checking for updates...
	I0919 22:33:19.908508   67622 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:33:19.910361   67622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:33:19.912958   67622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:19.914823   67622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:33:19.919772   67622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:33:19.921444   67622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:33:19.923242   67622 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:33:19.947549   67622 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:33:19.947649   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.004707   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:19.994191177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.004832   67622 docker.go:318] overlay module found
	I0919 22:33:20.006907   67622 out.go:179] * Using the docker driver based on user configuration
	I0919 22:33:20.008195   67622 start.go:304] selected driver: docker
	I0919 22:33:20.008214   67622 start.go:918] validating driver "docker" against <nil>
	I0919 22:33:20.008227   67622 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:33:20.008818   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.067697   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:20.055128215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.067871   67622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:33:20.068167   67622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:33:20.070129   67622 out.go:179] * Using Docker driver with root privileges
	I0919 22:33:20.071439   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:20.071513   67622 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:33:20.071523   67622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:33:20.071600   67622 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:20.073188   67622 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:33:20.074628   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:20.076439   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:20.078066   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.078159   67622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:33:20.078159   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:20.078174   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:20.078333   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:20.078348   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:20.078744   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:20.078777   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json: {Name:mk745b6092cc48756321ca371e559184d12db2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:20.100036   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:20.100059   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:20.100081   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:20.100133   67622 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:20.100248   67622 start.go:364] duration metric: took 93.303µs to acquireMachinesLock for "ha-984158"
	I0919 22:33:20.100277   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:20.100380   67622 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:33:20.103382   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:20.103623   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:20.103675   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:20.103751   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:20.103785   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.103860   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:20.103880   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103895   67622 main.go:141] libmachine: Parsing certificate...
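The three libmachine steps above (reading, PEM-decoding, and parsing ca.pem and cert.pem) load the machine certificates from the .minikube/certs directory before the container is created. A minimal sketch of that same read/decode/parse sequence with Go's standard library, assuming the certificate path is passed as the first argument (illustrative only, not minikube's code):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: parsecert <path-to-pem>")
	}
	// Reading certificate data, e.g. ~/.minikube/certs/ca.pem.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatalf("reading certificate data: %v", err)
	}
	// Decoding PEM data: take the first CERTIFICATE block.
	block, _ := pem.Decode(data)
	if block == nil || block.Type != "CERTIFICATE" {
		log.Fatal("no CERTIFICATE block found in PEM data")
	}
	// Parsing certificate from the DER bytes.
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatalf("parsing certificate: %v", err)
	}
	fmt.Printf("subject=%s notAfter=%s\n", cert.Subject, cert.NotAfter)
}
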
	I0919 22:33:20.104259   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:33:20.122340   67622 cli_runner.go:211] docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:33:20.122418   67622 network_create.go:284] running [docker network inspect ha-984158] to gather additional debugging logs...
	I0919 22:33:20.122455   67622 cli_runner.go:164] Run: docker network inspect ha-984158
	W0919 22:33:20.139578   67622 cli_runner.go:211] docker network inspect ha-984158 returned with exit code 1
	I0919 22:33:20.139605   67622 network_create.go:287] error running [docker network inspect ha-984158]: docker network inspect ha-984158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158 not found
	I0919 22:33:20.139623   67622 network_create.go:289] output of [docker network inspect ha-984158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158 not found
	
	** /stderr **
	I0919 22:33:20.139738   67622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:20.159001   67622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b807f0}
	I0919 22:33:20.159067   67622 network_create.go:124] attempt to create docker network ha-984158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:33:20.159151   67622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-984158 ha-984158
	I0919 22:33:20.220465   67622 network_create.go:108] docker network ha-984158 192.168.49.0/24 created
	I0919 22:33:20.220505   67622 kic.go:121] calculated static IP "192.168.49.2" for the "ha-984158" container
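network_create.go first probes for an existing network with docker network inspect and, when that exits non-zero, creates the bridge network with the chosen subnet, gateway and MTU (the docker network create at 22:33:20.159151 above). A minimal sketch of the same probe-then-create flow shelling out to the docker CLI, reusing the network name and CIDR from this log (a sketch, not minikube's implementation):

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	name := "ha-984158" // network name taken from the log above

	// Probe: inspect exits non-zero when the network does not exist yet.
	if err := exec.Command("docker", "network", "inspect", name).Run(); err == nil {
		fmt.Println("network already exists, nothing to do")
		return
	}

	// Create the bridge network with the same subnet/gateway/MTU options as the log.
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge",
		"--subnet=192.168.49.0/24",
		"--gateway=192.168.49.1",
		"-o", "--ip-masq",
		"-o", "--icc",
		"-o", "com.docker.network.driver.mtu=1500",
		"--label=created_by.minikube.sigs.k8s.io=true",
		name).CombinedOutput()
	if err != nil {
		log.Fatalf("docker network create failed: %v\n%s", err, out)
	}
	fmt.Printf("created network %s: %s", name, out)
}
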
	I0919 22:33:20.220576   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:20.238299   67622 cli_runner.go:164] Run: docker volume create ha-984158 --label name.minikube.sigs.k8s.io=ha-984158 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:20.257860   67622 oci.go:103] Successfully created a docker volume ha-984158
	I0919 22:33:20.258049   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --entrypoint /usr/bin/test -v ha-984158:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:20.650160   67622 oci.go:107] Successfully prepared a docker volume ha-984158
	I0919 22:33:20.650207   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.650234   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:20.650319   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:24.923696   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273335756s)
	I0919 22:33:24.923745   67622 kic.go:203] duration metric: took 4.273508289s to extract preloaded images to volume ...
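The preloaded image tarball is unpacked into the node's docker volume by running a throwaway kicbase container whose entrypoint is /usr/bin/tar (the docker run at 22:33:20.650319, which completed in ~4.3s). A minimal sketch of that extraction step, reusing the tarball path and volume name from the log and the kicbase image without its digest:

package main

import (
	"log"
	"os/exec"
)

func main() {
	tarball := "/home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4"
	volume := "ha-984158"
	image := "gcr.io/k8s-minikube/kicbase:v0.0.48"

	// Mount the tarball read-only, mount the node volume at /extractDir,
	// and let tar unpack the lz4-compressed archive straight into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image,
		"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("extracting preload: %v\n%s", err, out)
	}
}
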
	W0919 22:33:24.923837   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:24.923868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:24.923905   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:24.980440   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158 --name ha-984158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158 --network ha-984158 --ip 192.168.49.2 --volume ha-984158:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:25.243904   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Running}}
	I0919 22:33:25.262964   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:25.282632   67622 cli_runner.go:164] Run: docker exec ha-984158 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:25.335702   67622 oci.go:144] the created container "ha-984158" has a running status.
	I0919 22:33:25.335743   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa...
	I0919 22:33:26.151425   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:26.151477   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:26.176792   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.194873   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:26.194911   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:26.242371   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.260832   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:26.260926   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.280776   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.281060   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.281074   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:26.419419   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.419451   67622 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:33:26.419523   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.438011   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.438316   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.438334   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:33:26.587806   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.587878   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.606861   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.607093   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.607134   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:26.743969   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:26.744008   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:26.744055   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:26.744068   67622 provision.go:84] configureAuth start
	I0919 22:33:26.744152   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:26.765302   67622 provision.go:143] copyHostCerts
	I0919 22:33:26.765368   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765405   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:26.765414   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765489   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:26.765575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765596   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:26.765600   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765626   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:26.765682   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765696   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:26.765702   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765725   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:26.765773   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:33:27.052522   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:27.052586   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:27.052619   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.077750   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.179645   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:27.179718   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:27.210288   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:27.210351   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:33:27.238586   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:27.238648   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:33:27.264405   67622 provision.go:87] duration metric: took 520.31998ms to configureAuth
	I0919 22:33:27.264432   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:27.264630   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:27.264744   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.284923   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:27.285168   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:27.285188   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:27.533206   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:27.533232   67622 machine.go:96] duration metric: took 1.272377771s to provisionDockerMachine
	I0919 22:33:27.533245   67622 client.go:171] duration metric: took 7.429561262s to LocalClient.Create
	I0919 22:33:27.533269   67622 start.go:167] duration metric: took 7.429646395s to libmachine.API.Create "ha-984158"
	I0919 22:33:27.533281   67622 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:33:27.533292   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:27.533378   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:27.533430   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.551574   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.651298   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:27.655006   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:27.655037   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:27.655045   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:27.655051   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:27.655070   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:27.655147   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:27.655229   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:27.655238   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:27.655339   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:27.664695   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:27.695230   67622 start.go:296] duration metric: took 161.927495ms for postStartSetup
	I0919 22:33:27.695585   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.713847   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:27.714141   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:27.714182   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.735921   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.829368   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:27.833923   67622 start.go:128] duration metric: took 7.733528511s to createHost
	I0919 22:33:27.833953   67622 start.go:83] releasing machines lock for "ha-984158", held for 7.733693746s
	I0919 22:33:27.834022   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.851363   67622 ssh_runner.go:195] Run: cat /version.json
	I0919 22:33:27.851382   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:27.851422   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.851435   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.870773   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.871172   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:28.037834   67622 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:28.042707   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:28.184533   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:28.189494   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.213778   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:28.213869   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.245273   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:28.245311   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:28.245342   67622 detect.go:190] detected "systemd" cgroup driver on host os
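detect.go reports a "systemd" cgroup driver for this host (Ubuntu 22.04, kernel 6.8). A minimal sketch of one common heuristic for making that call, assuming detection via the cgroup v2 unified hierarchy marker and the presence of a running systemd; this approximates, rather than reproduces, minikube's detect.go:

package main

import (
	"fmt"
	"os"
)

// detectCgroupDriver guesses the cgroup driver for the host: hosts running
// systemd with the cgroup v2 unified hierarchy are treated as "systemd",
// everything else falls back to "cgroupfs".
func detectCgroupDriver() string {
	_, errV2 := os.Stat("/sys/fs/cgroup/cgroup.controllers") // cgroup v2 marker file
	_, errSystemd := os.Stat("/run/systemd/system")          // present when systemd is PID 1
	if errV2 == nil && errSystemd == nil {
		return "systemd"
	}
	return "cgroupfs"
}

func main() {
	fmt.Printf("detected %q cgroup driver on host os\n", detectCgroupDriver())
}
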
	I0919 22:33:28.245409   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:28.260712   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:28.273221   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:28.273285   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:28.287690   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:28.303163   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:28.371756   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:28.449427   67622 docker.go:234] disabling docker service ...
	I0919 22:33:28.449499   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:28.467447   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:28.481298   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:28.558342   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:28.661953   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:28.675151   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:28.695465   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:28.695540   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.709844   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:28.709908   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.720817   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.731627   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.742506   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:28.753955   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.765830   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.784178   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.795285   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:28.804935   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:28.814326   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:28.918546   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
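The sequence of sed one-liners above rewrites /etc/crio/crio.conf.d/02-crio.conf so cri-o uses the expected pause image, the systemd cgroup manager and a pod-scoped conmon cgroup, then reloads systemd and restarts crio. A minimal sketch of the first few of those edits done in Go instead of sed, assuming the drop-in path from the log (a sketch of the rewrite, not minikube's implementation):

package main

import (
	"log"
	"os"
	"regexp"
)

func main() {
	path := "/etc/crio/crio.conf.d/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		log.Fatal(err)
	}
	conf := string(data)

	// Same substitutions the log performs with sed: pin the pause image and
	// switch the cgroup manager to systemd.
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)

	// Drop any existing conmon_cgroup line and re-add it after cgroup_manager.
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	if err := os.WriteFile(path, []byte(conf), 0o644); err != nil {
		log.Fatal(err)
	}
	// The log then runs: sudo systemctl daemon-reload && sudo systemctl restart crio.
}
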
	I0919 22:33:29.014541   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:29.014608   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:29.018746   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:29.018808   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:29.023643   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:29.059951   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:29.060029   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.098887   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.139500   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:29.141059   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:29.158455   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:29.162464   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
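The bash one-liner at 22:33:29.162464 pins host.minikube.internal to the network gateway (192.168.49.1) by filtering any old entry out of /etc/hosts and appending a fresh one. The same idempotent rewrite expressed in Go, for illustration only (assumes enough privilege to write /etc/hosts):

package main

import (
	"log"
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.1\thost.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		log.Fatal(err)
	}

	// Keep every line that does not already map host.minikube.internal ...
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\thost.minikube.internal") {
			kept = append(kept, line)
		}
	}

	// ... then append the fresh mapping and write the file back.
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		log.Fatal(err)
	}
}
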
	I0919 22:33:29.175140   67622 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Soc
ketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:33:29.175280   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:29.175333   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.248936   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.248961   67622 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:33:29.249018   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.287448   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.287472   67622 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:33:29.287479   67622 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:33:29.287577   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:33:29.287645   67622 ssh_runner.go:195] Run: crio config
	I0919 22:33:29.333242   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:29.333266   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:29.333277   67622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:33:29.333307   67622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:33:29.333435   67622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
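The rendered kubeadm config above is later copied onto the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp further down at 22:33:29.409639) and moved into place before kubeadm runs. A minimal sketch of how such a config file would be handed to kubeadm init, assuming kubeadm is on the node's PATH and that the SystemVerification preflight check is skipped, as the log notes for the docker driver; this illustrates the flow, not minikube's exact flag set:

package main

import (
	"log"
	"os"
	"os/exec"
)

func main() {
	// The config rendered in the log, after kubeadm.yaml.new is copied into place.
	cfg := "/var/tmp/minikube/kubeadm.yaml"

	// Hand the file to kubeadm init; SystemVerification is ignored for the docker driver.
	cmd := exec.Command("sudo", "kubeadm", "init",
		"--config", cfg,
		"--ignore-preflight-errors=SystemVerification")
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		log.Fatalf("kubeadm init failed: %v", err)
	}
}
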
	
	I0919 22:33:29.333463   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:29.333506   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:29.346933   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:29.347143   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
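Before writing the kube-vip manifest above, kube-vip.go probes for the ip_vs kernel modules (22:33:29.333506) and, because lsmod | grep ip_vs exits with status 1, gives up on IPVS-based control-plane load balancing and falls back to ARP-based leader election for the VIP 192.168.49.254. A minimal sketch of that probe, assuming the same lsmod-based check:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// lsmod lists loaded kernel modules; the log greps its output for ip_vs.
	out, err := exec.Command("lsmod").Output()
	if err != nil {
		fmt.Fprintf(os.Stderr, "lsmod failed: %v\n", err)
		os.Exit(1)
	}
	if strings.Contains(string(out), "ip_vs") {
		fmt.Println("ip_vs modules available: IPVS control-plane load balancing could be enabled")
		return
	}
	// Same outcome as the log: rely on ARP announcement and leader election only.
	fmt.Println("ip_vs modules not loaded: generating kube-vip config without IPVS load balancing")
}
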
	I0919 22:33:29.347207   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:29.356691   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:29.356785   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:33:29.366595   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:33:29.386942   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:29.409639   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:33:29.428838   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:33:29.449681   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:29.453679   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.465645   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:29.534315   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:29.558739   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:33:29.558767   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:29.558787   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:29.558925   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:29.558985   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:29.559000   67622 certs.go:256] generating profile certs ...
	I0919 22:33:29.559069   67622 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:29.559085   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt with IP's: []
	I0919 22:33:30.287530   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt ...
	I0919 22:33:30.287574   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt: {Name:mk4722cc3499628a90845973a8533bb2f9abaeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287824   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key ...
	I0919 22:33:30.287842   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key: {Name:mk95f513fb24356a441cd3443b0c241a35c61186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287965   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f
	I0919 22:33:30.287986   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:33:30.489410   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f ...
	I0919 22:33:30.489443   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f: {Name:mk50e3acb42d56649151d2b237558cdb8ee1e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489635   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f ...
	I0919 22:33:30.489654   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f: {Name:mke306934752782de0837143fc2872d74f6e5eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489765   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:30.489897   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:30.489990   67622 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:30.490013   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt with IP's: []
	I0919 22:33:30.692692   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt ...
	I0919 22:33:30.692725   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt: {Name:mkec855f3fc5cc887af952272036f6b6db6c122d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.692913   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key ...
	I0919 22:33:30.692929   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key: {Name:mk41b934f9d330e25cbaab5814efeb52422665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
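crypto.go generates the profile's apiserver serving certificate with the SAN list shown at 22:33:30.287986: the service VIP 10.96.0.1, loopback, 10.0.0.1, the node IP 192.168.49.2 and the HA VIP 192.168.49.254, signed by the minikubeCA key. A minimal sketch of issuing a certificate with those IP SANs using Go's standard library; it is self-signed here for brevity, whereas the real certificate is signed by the cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // subject name chosen for the sketch
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The IP SANs from the log line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.254"),
		},
		DNSNames: []string{"localhost", "minikube"},
	}
	// Self-signed for the sketch; minikube signs with the minikubeCA key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		log.Fatal(err)
	}
}
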
	I0919 22:33:30.693033   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:30.693058   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:30.693082   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:30.693113   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:30.693131   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:30.693163   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:30.693182   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:30.693202   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:30.693280   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:30.693327   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:30.693343   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:30.693379   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:30.693413   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:30.693444   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:30.693498   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:30.693554   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:30.693575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:30.693594   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:30.694169   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:30.721034   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:30.747256   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:30.773231   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:30.799758   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:33:30.825801   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:30.852404   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:30.879195   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:30.905339   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:30.934694   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:30.960677   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:30.987763   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:33:31.008052   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:31.014839   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:31.025609   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029511   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029570   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.036708   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:31.047387   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:31.058096   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062519   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062579   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.070083   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:31.080599   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:31.091228   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095407   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095480   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.102644   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:31.114044   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:31.118226   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:31.118374   67622 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:31.118467   67622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:33:31.118521   67622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:33:31.155950   67622 cri.go:89] found id: ""
	I0919 22:33:31.156024   67622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:33:31.166037   67622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:33:31.175817   67622 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:33:31.175867   67622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:33:31.185690   67622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:33:31.185707   67622 kubeadm.go:157] found existing configuration files:
	
	I0919 22:33:31.185748   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:33:31.195069   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:33:31.195184   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:33:31.204614   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:33:31.216208   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:33:31.216271   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:33:31.226344   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.239080   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:33:31.239168   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.248993   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:33:31.258113   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:33:31.258175   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:33:31.267147   67622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:33:31.307922   67622 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:33:31.308018   67622 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:33:31.323647   67622 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:33:31.323774   67622 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:33:31.323839   67622 kubeadm.go:310] OS: Linux
	I0919 22:33:31.323926   67622 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:33:31.324015   67622 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:33:31.324149   67622 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:33:31.324222   67622 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:33:31.324293   67622 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:33:31.324356   67622 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:33:31.324417   67622 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:33:31.324484   67622 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:33:31.377266   67622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:33:31.377414   67622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:33:31.377573   67622 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:33:31.384351   67622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:33:31.386660   67622 out.go:252]   - Generating certificates and keys ...
	I0919 22:33:31.386732   67622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:33:31.386811   67622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:33:31.789403   67622 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:33:31.939575   67622 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:33:32.401218   67622 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:33:32.595052   67622 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:33:33.118331   67622 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:33:33.118543   67622 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.059417   67622 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:33:34.059600   67622 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.382200   67622 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:33:34.860984   67622 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:33:34.940846   67622 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:33:34.940919   67622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:33:35.161325   67622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:33:35.301598   67622 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:33:35.610006   67622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:33:35.767736   67622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:33:36.001912   67622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:33:36.002376   67622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:33:36.005697   67622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:33:36.010843   67622 out.go:252]   - Booting up control plane ...
	I0919 22:33:36.010955   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:33:36.011044   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:33:36.011162   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:33:36.018352   67622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:33:36.018463   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:33:36.024835   67622 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:33:36.025002   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:33:36.025072   67622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:33:36.099408   67622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:33:36.099593   67622 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:33:37.100521   67622 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186505s
	I0919 22:33:37.103674   67622 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:33:37.103813   67622 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:33:37.103961   67622 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:33:37.104092   67622 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:33:38.781776   67622 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.678113429s
	I0919 22:33:39.011334   67622 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.907735584s
	I0919 22:33:43.273677   67622 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.17006372s
	I0919 22:33:43.285923   67622 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:33:43.298989   67622 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:33:43.310631   67622 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:33:43.310870   67622 kubeadm.go:310] [mark-control-plane] Marking the node ha-984158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:33:43.319951   67622 kubeadm.go:310] [bootstrap-token] Using token: wc3lep.4w3ocubibd25hbwe
	I0919 22:33:43.321976   67622 out.go:252]   - Configuring RBAC rules ...
	I0919 22:33:43.322154   67622 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:33:43.325670   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:33:43.333517   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:33:43.338509   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:33:43.342046   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:33:43.345237   67622 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:33:43.680686   67622 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:33:44.099041   67622 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:33:44.680531   67622 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:33:44.681480   67622 kubeadm.go:310] 
	I0919 22:33:44.681572   67622 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:33:44.681591   67622 kubeadm.go:310] 
	I0919 22:33:44.681690   67622 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:33:44.681708   67622 kubeadm.go:310] 
	I0919 22:33:44.681761   67622 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:33:44.681854   67622 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:33:44.681910   67622 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:33:44.681916   67622 kubeadm.go:310] 
	I0919 22:33:44.681968   67622 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:33:44.681978   67622 kubeadm.go:310] 
	I0919 22:33:44.682015   67622 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:33:44.682021   67622 kubeadm.go:310] 
	I0919 22:33:44.682066   67622 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:33:44.682162   67622 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:33:44.682244   67622 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:33:44.682258   67622 kubeadm.go:310] 
	I0919 22:33:44.682378   67622 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:33:44.682497   67622 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:33:44.682510   67622 kubeadm.go:310] 
	I0919 22:33:44.682620   67622 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.682733   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 22:33:44.682757   67622 kubeadm.go:310] 	--control-plane 
	I0919 22:33:44.682761   67622 kubeadm.go:310] 
	I0919 22:33:44.682837   67622 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:33:44.682844   67622 kubeadm.go:310] 
	I0919 22:33:44.682919   67622 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.683036   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 22:33:44.685970   67622 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:33:44.686071   67622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:33:44.686097   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:44.686119   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:44.688616   67622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:33:44.690471   67622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:33:44.695364   67622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:33:44.695381   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:33:44.715791   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:33:44.939557   67622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:33:44.939639   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:44.939678   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158 minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=true
	I0919 22:33:45.023827   67622 ops.go:34] apiserver oom_adj: -16
	I0919 22:33:45.023957   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:45.524455   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.024018   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.524600   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.024332   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.524121   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.592879   67622 kubeadm.go:1105] duration metric: took 2.653303844s to wait for elevateKubeSystemPrivileges
	I0919 22:33:47.592920   67622 kubeadm.go:394] duration metric: took 16.47455539s to StartCluster
	I0919 22:33:47.592944   67622 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593012   67622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:47.593661   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593878   67622 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:47.593899   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:33:47.593915   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:33:47.593910   67622 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:33:47.593968   67622 addons.go:69] Setting storage-provisioner=true in profile "ha-984158"
	I0919 22:33:47.593987   67622 addons.go:238] Setting addon storage-provisioner=true in "ha-984158"
	I0919 22:33:47.594014   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.594020   67622 addons.go:69] Setting default-storageclass=true in profile "ha-984158"
	I0919 22:33:47.594052   67622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-984158"
	I0919 22:33:47.594180   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:47.594397   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.594490   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.616114   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:33:47.616790   67622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:33:47.616815   67622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:33:47.616821   67622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:33:47.616827   67622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:33:47.616832   67622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:33:47.616874   67622 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:33:47.617290   67622 addons.go:238] Setting addon default-storageclass=true in "ha-984158"
	I0919 22:33:47.617334   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.617664   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.618198   67622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:33:47.619811   67622 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.619828   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:33:47.619873   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639214   67622 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.639233   67622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:33:47.639292   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639429   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.661245   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.673462   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:33:47.757401   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.772885   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.832329   67622 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 22:33:48.046946   67622 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:33:48.048036   67622 addons.go:514] duration metric: took 454.124749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:33:48.048079   67622 start.go:246] waiting for cluster config update ...
	I0919 22:33:48.048094   67622 start.go:255] writing updated cluster config ...
	I0919 22:33:48.049801   67622 out.go:203] 
	I0919 22:33:48.051165   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:48.051243   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.053137   67622 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:33:48.054674   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:48.056311   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:48.057779   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.057806   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:48.057888   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:48.057928   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:48.057940   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:48.058063   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.078572   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:48.078592   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:48.078612   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:48.078641   67622 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:48.078744   67622 start.go:364] duration metric: took 83.645µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:33:48.078773   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:48.078850   67622 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:33:48.081555   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:48.081669   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:48.081703   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:48.081781   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:48.081822   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081843   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.081910   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:48.081940   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081960   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.082241   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:48.099940   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc0016638f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:33:48.099978   67622 kic.go:121] calculated static IP "192.168.49.3" for the "ha-984158-m02" container
	I0919 22:33:48.100047   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:48.119768   67622 cli_runner.go:164] Run: docker volume create ha-984158-m02 --label name.minikube.sigs.k8s.io=ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:48.140861   67622 oci.go:103] Successfully created a docker volume ha-984158-m02
	I0919 22:33:48.140948   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --entrypoint /usr/bin/test -v ha-984158-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:48.564029   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m02
	I0919 22:33:48.564088   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.564128   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:48.564199   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:52.827364   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.263115206s)
	I0919 22:33:52.827395   67622 kic.go:203] duration metric: took 4.263265347s to extract preloaded images to volume ...
	W0919 22:33:52.827486   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:52.827514   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:52.827554   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:52.885075   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m02 --name ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m02 --network ha-984158 --ip 192.168.49.3 --volume ha-984158-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:53.180687   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Running}}
	I0919 22:33:53.199679   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.219636   67622 cli_runner.go:164] Run: docker exec ha-984158-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:53.277586   67622 oci.go:144] the created container "ha-984158-m02" has a running status.
	I0919 22:33:53.277613   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa...
	I0919 22:33:53.439379   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:53.439435   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:53.481669   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.502631   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:53.502661   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:53.550818   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.569934   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:53.570033   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.591163   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.591567   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.591594   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:53.732425   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.732454   67622 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:33:53.732620   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.753544   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.753771   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.753787   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:33:53.905778   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.905859   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.925947   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.926237   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.926262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:54.064017   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:54.064058   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:54.064091   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:54.064128   67622 provision.go:84] configureAuth start
	I0919 22:33:54.064205   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.083365   67622 provision.go:143] copyHostCerts
	I0919 22:33:54.083408   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083437   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:54.083446   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083518   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:54.083599   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083619   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:54.083625   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083651   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:54.083695   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083712   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:54.083718   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083741   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:54.083825   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:33:54.283812   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:54.283869   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:54.283908   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.302357   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.401996   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:54.402067   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:54.430462   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:54.430540   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:33:54.457015   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:54.457097   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:33:54.483980   67622 provision.go:87] duration metric: took 419.834494ms to configureAuth
	I0919 22:33:54.484006   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:54.484189   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:54.484291   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.502801   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:54.503005   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:54.503020   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:54.741937   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:54.741974   67622 machine.go:96] duration metric: took 1.172016504s to provisionDockerMachine
	I0919 22:33:54.741989   67622 client.go:171] duration metric: took 6.660276334s to LocalClient.Create
	I0919 22:33:54.742015   67622 start.go:167] duration metric: took 6.660346483s to libmachine.API.Create "ha-984158"
	I0919 22:33:54.742030   67622 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:33:54.742043   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:54.742141   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:54.742204   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.760779   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.861057   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:54.864884   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:54.864926   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:54.864936   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:54.864942   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:54.864952   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:54.865018   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:54.865096   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:54.865119   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:54.865208   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:54.874518   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:54.902675   67622 start.go:296] duration metric: took 160.632418ms for postStartSetup
	I0919 22:33:54.903619   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.921915   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:54.922275   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:54.922332   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.939498   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.032204   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:55.036544   67622 start.go:128] duration metric: took 6.957677622s to createHost
	I0919 22:33:55.036576   67622 start.go:83] releasing machines lock for "ha-984158-m02", held for 6.957813538s
	I0919 22:33:55.036645   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:55.056621   67622 out.go:179] * Found network options:
	I0919 22:33:55.058171   67622 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:33:55.059521   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:33:55.059575   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:33:55.059642   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:55.059693   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.059730   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:55.059795   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.079269   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.079505   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.307919   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:55.312965   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.336548   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:55.336628   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.368875   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:55.368896   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:55.368929   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:55.368975   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:55.384084   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:55.396627   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:55.396684   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:55.411878   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:55.426921   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:55.498750   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:55.574511   67622 docker.go:234] disabling docker service ...
	I0919 22:33:55.574592   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:55.592451   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:55.605407   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:55.676576   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:55.779960   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:55.791691   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:55.810222   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:55.810287   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.823669   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:55.823742   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.835957   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.848163   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.862113   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:55.874185   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.886226   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.904556   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.915914   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:55.925425   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:55.934730   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:56.048946   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
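Note: the sed edits above are intended to leave the following values in /etc/crio/crio.conf.d/02-crio.conf before cri-o is restarted; a quick sanity check on the node would look like this (sketch, the exact file layout may differ):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, approximately:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",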
	I0919 22:33:56.146544   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:56.146625   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:56.150812   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:56.150868   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:56.155192   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:56.191696   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:56.191785   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.233991   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.274090   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:56.275720   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:33:56.276812   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:56.294583   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:56.298596   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:56.311418   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:33:56.311645   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:56.311889   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:56.330141   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:56.330381   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:33:56.330391   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:56.330404   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.330513   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:56.330548   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:56.330558   67622 certs.go:256] generating profile certs ...
	I0919 22:33:56.330645   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:56.330671   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:33:56.330686   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:33:56.589696   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 ...
	I0919 22:33:56.589724   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648: {Name:mk231e62d196ad4ac4ba36bf02a832f78de0258d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.589931   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 ...
	I0919 22:33:56.589950   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648: {Name:mkf30412a461a8bacfd366640c7d4da1146a9418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.590056   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:56.590233   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
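Note: the SANs baked into the profile cert above (including the node IPs 192.168.49.2/.3 and the HA VIP 192.168.49.254) can be confirmed with a one-liner like this (sketch; the path is the one from the log, run on the CI host):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'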
	I0919 22:33:56.590374   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:56.590389   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:56.590402   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:56.590416   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:56.590429   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:56.590440   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:56.590450   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:56.590459   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:56.590476   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:56.590527   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:56.590552   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:56.590561   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:56.590584   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:56.590605   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:56.590626   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:56.590665   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:56.590692   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:56.590708   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:56.590721   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:56.590767   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:56.609877   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:56.698485   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:33:56.703209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:33:56.716550   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:33:56.720735   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:33:56.733890   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:33:56.737616   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:33:56.750557   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:33:56.754948   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:33:56.770690   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:33:56.774864   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:33:56.787587   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:33:56.791154   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:33:56.804497   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:56.832411   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:56.858185   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:56.885311   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:56.911248   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:33:56.937552   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:56.963365   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:56.988811   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:57.014413   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:57.043525   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:57.069549   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:57.095993   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:33:57.115254   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:33:57.135395   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:33:57.155031   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:33:57.175220   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:33:57.194674   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:33:57.215027   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:33:57.235048   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:57.240702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:57.251492   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255754   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255806   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.263388   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:57.274606   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:57.285494   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289707   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289758   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.296995   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:57.307702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:57.318927   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323131   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323194   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.330266   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
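Note: the symlink names used in the three steps above (b5213941.0, 51391683.0, 3ec20f2e.0) are the OpenSSL subject hashes of the respective certificates, so e.g. for the minikube CA the same link can be reproduced by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0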
	I0919 22:33:57.340891   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:57.344726   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:57.344784   67622 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:33:57.344872   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
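Note: the kubelet flags above end up in a systemd drop-in that is copied to the node a few lines further down; once in place it can be inspected with (sketch):

	systemctl cat kubelet
	cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf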
	I0919 22:33:57.344897   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:57.344937   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:57.357462   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:57.357529   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
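Note: because "lsmod | grep ip_vs" returned nothing, the generated manifest above relies on ARP announcement of the VIP with leader election rather than IPVS-based control-plane load-balancing; on a host where the module is available it could be loaded and re-checked with (sketch):

	sudo modprobe ip_vs
	lsmod | grep ip_vs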
	I0919 22:33:57.357582   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:57.367667   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:57.367722   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:33:57.377333   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:33:57.395969   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:57.418145   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:33:57.439308   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:57.443458   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:57.454967   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:57.522382   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:57.545690   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:57.545979   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:57.546124   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:33:57.546185   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:57.565712   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:57.714381   67622 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:57.714452   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:34:14.891768   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.177290621s)
	I0919 22:34:14.891806   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:15.112649   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m02 minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:15.189152   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:15.268843   67622 start.go:319] duration metric: took 17.722860685s to joinCluster
	I0919 22:34:15.268921   67622 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:15.269212   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:15.270715   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:15.272193   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:15.373529   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:15.387143   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:15.387217   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:15.387440   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	W0919 22:34:17.391040   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:19.391218   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:21.391885   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:23.891865   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:25.892208   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	I0919 22:34:28.391466   67622 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:34:28.391502   67622 node_ready.go:38] duration metric: took 13.004045549s for node "ha-984158-m02" to be "Ready" ...
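Note: roughly the same readiness gate, expressed with kubectl (the kubeconfig context name is assumed here to match the profile name):

	kubectl --context ha-984158 wait --for=condition=Ready node/ha-984158-m02 --timeout=6m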
	I0919 22:34:28.391521   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:34:28.391578   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:34:28.403875   67622 api_server.go:72] duration metric: took 13.134915716s to wait for apiserver process to appear ...
	I0919 22:34:28.403907   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:34:28.403928   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:34:28.409570   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:34:28.410599   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:34:28.410630   67622 api_server.go:131] duration metric: took 6.715556ms to wait for apiserver health ...
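Note: the same health probe can be made by hand; -k is used because the apiserver's serving certificate is not in the host trust store (sketch, endpoint taken from the log):

	curl -k https://192.168.49.2:8443/healthz
	# ok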
	I0919 22:34:28.410646   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:34:28.415646   67622 system_pods.go:59] 17 kube-system pods found
	I0919 22:34:28.415679   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.415685   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.415689   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.415692   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.415695   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.415698   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.415701   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.415704   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.415707   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.415710   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.415713   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.415715   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.415718   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.415721   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.415723   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.415726   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.415729   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.415734   67622 system_pods.go:74] duration metric: took 5.082988ms to wait for pod list to return data ...
	I0919 22:34:28.415742   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:34:28.418466   67622 default_sa.go:45] found service account: "default"
	I0919 22:34:28.418487   67622 default_sa.go:55] duration metric: took 2.73954ms for default service account to be created ...
	I0919 22:34:28.418498   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:34:28.422326   67622 system_pods.go:86] 17 kube-system pods found
	I0919 22:34:28.422351   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.422357   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.422361   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.422365   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.422368   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.422376   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.422379   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.422383   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.422386   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.422390   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.422393   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.422396   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.422399   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.422402   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.422405   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.422408   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.422415   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.422421   67622 system_pods.go:126] duration metric: took 3.917676ms to wait for k8s-apps to be running ...
	I0919 22:34:28.422429   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:34:28.422473   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:34:28.434607   67622 system_svc.go:56] duration metric: took 12.16943ms WaitForService to wait for kubelet
	I0919 22:34:28.434637   67622 kubeadm.go:578] duration metric: took 13.165683838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:28.434659   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:34:28.437727   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437756   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437777   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437784   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437791   67622 node_conditions.go:105] duration metric: took 3.125214ms to run NodePressure ...
	I0919 22:34:28.437804   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:34:28.437837   67622 start.go:255] writing updated cluster config ...
	I0919 22:34:28.440033   67622 out.go:203] 
	I0919 22:34:28.441576   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:28.441673   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.443252   67622 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:34:28.444693   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:34:28.446038   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:28.447156   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.447185   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:28.447193   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:28.447285   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:28.447301   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:34:28.447448   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.469851   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:28.469873   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:28.469889   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:28.469913   67622 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:28.470008   67622 start.go:364] duration metric: took 81.331µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:34:28.470041   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:28.470170   67622 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:34:28.472544   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:34:28.472649   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:34:28.472677   67622 client.go:168] LocalClient.Create starting
	I0919 22:34:28.472742   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:34:28.472780   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.472861   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:34:28.472888   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472901   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.473209   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:28.490760   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc001af8060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:34:28.490805   67622 kic.go:121] calculated static IP "192.168.49.4" for the "ha-984158-m03" container
	I0919 22:34:28.490880   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:34:28.509896   67622 cli_runner.go:164] Run: docker volume create ha-984158-m03 --label name.minikube.sigs.k8s.io=ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:34:28.528837   67622 oci.go:103] Successfully created a docker volume ha-984158-m03
	I0919 22:34:28.528911   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --entrypoint /usr/bin/test -v ha-984158-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:34:28.927062   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m03
	I0919 22:34:28.927168   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.927199   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:34:28.927268   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:34:33.212737   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.285428249s)
	I0919 22:34:33.212770   67622 kic.go:203] duration metric: took 4.285569649s to extract preloaded images to volume ...
	W0919 22:34:33.212842   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:34:33.212868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:34:33.212907   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:34:33.271794   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m03 --name ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m03 --network ha-984158 --ip 192.168.49.4 --volume ha-984158-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:34:33.577096   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Running}}
	I0919 22:34:33.595112   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:33.615056   67622 cli_runner.go:164] Run: docker exec ha-984158-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:34:33.665241   67622 oci.go:144] the created container "ha-984158-m03" has a running status.
	I0919 22:34:33.665277   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa...
	I0919 22:34:34.167881   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:34:34.167925   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:34:34.195311   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.214983   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:34:34.215010   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:34:34.269287   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.290822   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:34.290917   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.310406   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.310629   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.310645   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:34.449392   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.449418   67622 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:34:34.449477   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.470431   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.470643   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.470659   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:34:34.622394   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.622486   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.641997   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.642244   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.642262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:34.780134   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.780169   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:34:34.780191   67622 ubuntu.go:190] setting up certificates
	I0919 22:34:34.780205   67622 provision.go:84] configureAuth start
	I0919 22:34:34.780271   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:34.799584   67622 provision.go:143] copyHostCerts
	I0919 22:34:34.799658   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799692   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:34:34.799701   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799769   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:34:34.799851   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799870   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:34:34.799877   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799904   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:34:34.799966   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.799983   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:34:34.799989   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.800012   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:34:34.800115   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:34:34.944518   67622 provision.go:177] copyRemoteCerts
	I0919 22:34:34.944575   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:34.944606   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.963408   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
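Note: the cert copies below go over an ssh session equivalent to the following (sketch; key path, port and user are taken from the log line above):

	ssh -i /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa -p 32793 docker@127.0.0.1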
	I0919 22:34:35.062939   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:35.063013   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:35.095527   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:35.095582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:35.122809   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:35.122880   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:35.150023   67622 provision.go:87] duration metric: took 369.804514ms to configureAuth
	I0919 22:34:35.150056   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:35.150311   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:35.150452   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.170186   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:35.170414   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:35.170546   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:34:35.424872   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:34:35.424903   67622 machine.go:96] duration metric: took 1.1340482s to provisionDockerMachine
	I0919 22:34:35.424913   67622 client.go:171] duration metric: took 6.952229218s to LocalClient.Create
	I0919 22:34:35.424932   67622 start.go:167] duration metric: took 6.95228363s to libmachine.API.Create "ha-984158"
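The SSH command a few lines above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O so the service CIDR 10.96.0.0/12 is treated as an insecure registry range. One way to confirm the flag actually reached the daemon after the restart, assuming the kicbase crio unit sources this file into its ExecStart (run on the node):
	sudo cat /etc/sysconfig/crio.minikube
	ps -o args= -C crio | tr ' ' '\n' | grep -A1 -- --insecure-registry
	# expected: --insecure-registry followed by 10.96.0.0/12, matching the drop-in above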
	I0919 22:34:35.424941   67622 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:34:35.424950   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:35.425005   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:35.425044   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.443122   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.542973   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:35.547045   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:35.547098   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:35.547140   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:35.547149   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:35.547164   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:34:35.547243   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:34:35.547346   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:34:35.547359   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:34:35.547461   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:35.557222   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:35.587487   67622 start.go:296] duration metric: took 162.532916ms for postStartSetup
	I0919 22:34:35.587898   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.605883   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:35.606188   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:35.606230   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.625506   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.719327   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:35.724945   67622 start.go:128] duration metric: took 7.25475977s to createHost
	I0919 22:34:35.724975   67622 start.go:83] releasing machines lock for "ha-984158-m03", held for 7.25495293s
	I0919 22:34:35.725066   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.746436   67622 out.go:179] * Found network options:
	I0919 22:34:35.748613   67622 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:34:35.750204   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750230   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750252   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750261   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:35.750333   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:34:35.750367   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.750414   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:35.750481   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.770785   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.771520   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:36.012617   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:36.017809   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.041480   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:36.041572   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.074662   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
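The two find/mv runs above park the preinstalled loopback, podman-bridge and crio-bridge CNI configs under *.mk_disabled so they cannot shadow the CNI the cluster actually uses (kindnet, per the kube-system pods later in this log). A quick look at what remains active on the node:
	ls -1 /etc/cni/net.d/
	# only *.mk_disabled copies of 87-podman-bridge.conflist / 100-crio-bridge.conf should be left
	# until the cluster CNI writes its own config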
	I0919 22:34:36.074688   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:34:36.074719   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:36.074766   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:36.093544   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:36.107751   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:34:36.107801   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:34:36.123972   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:34:36.140690   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:34:36.213915   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:34:36.293890   67622 docker.go:234] disabling docker service ...
	I0919 22:34:36.293970   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:34:36.315495   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:34:36.329394   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:34:36.401603   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:34:36.566519   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.580168   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:36.598521   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:34:36.598580   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.612994   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:34:36.613052   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.625369   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.636513   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.647884   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:36.658467   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.670077   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.688463   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
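The sed series above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: the pause image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is forced to systemd with conmon_cgroup = "pod", and a default_sysctls block sets net.ipv4.ip_unprivileged_port_start=0. A grep on the node shows the touched keys (values per the commands above; exact layout depends on the stock 02-crio.conf shipped in the kicbase image):
	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",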
	I0919 22:34:36.700347   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:36.710192   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:36.722230   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.786818   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:34:36.889165   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:34:36.889244   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:34:36.893369   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.893434   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.897483   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.935462   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:34:36.935558   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:36.971682   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:37.011225   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:34:37.012939   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:34:37.014619   67622 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:34:37.016609   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:37.035904   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:37.040209   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
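The bash one-liner above is the /etc/hosts update idiom used inside the container: filter out any stale host.minikube.internal entry, append the fresh one, and copy the temp file back over /etc/hosts. cp rather than mv is presumably used because /etc/hosts is a bind mount inside the Docker container and has to be rewritten in place. The end result is a single added entry:
	grep host.minikube.internal /etc/hosts
	# 192.168.49.1	host.minikube.internal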
	I0919 22:34:37.053278   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:34:37.053547   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:37.053803   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:34:37.073847   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:37.074139   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:34:37.074157   67622 certs.go:194] generating shared ca certs ...
	I0919 22:34:37.074173   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.074282   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:34:37.074329   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:34:37.074340   67622 certs.go:256] generating profile certs ...
	I0919 22:34:37.074417   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:34:37.074441   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:34:37.074452   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.137117   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 ...
	I0919 22:34:37.137145   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7: {Name:mk19194d581061c0301a7ebaafcb4f75dd6f88da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137332   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 ...
	I0919 22:34:37.137346   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7: {Name:mkdc03dbd8fb2d6fc0a8ac2bb45b7aa14987fe74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137418   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:34:37.137557   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:34:37.137679   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
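The apiserver profile cert regenerated above has to cover every control-plane IP plus the kube-vip VIP; the SAN list in the log is [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]. Once it lands on the node as /var/lib/minikube/certs/apiserver.crt (the scp appears further down), the SANs can be read back with:
	sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	  | grep -A1 'Subject Alternative Name'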
	I0919 22:34:37.137694   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.137706   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.137719   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.137732   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.137744   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.137756   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.137768   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.137780   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.137836   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:34:37.137865   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.137875   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.137895   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.137918   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.137950   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.137989   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:37.138014   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.138027   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.138042   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.138089   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:37.156562   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:37.245522   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:34:37.249874   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:34:37.263553   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:34:37.267840   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:34:37.282009   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:34:37.286008   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:34:37.299365   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:34:37.303011   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:34:37.316000   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:34:37.319968   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:34:37.335075   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:34:37.339209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:34:37.352485   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.379736   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:34:37.405614   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.430819   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:34:37.457286   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:34:37.485582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.511990   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.539620   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:34:37.566336   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.597966   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:34:37.624934   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:34:37.652281   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:34:37.672835   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:34:37.693826   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:34:37.712995   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:34:37.735150   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:34:37.755380   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:34:37.775695   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:34:37.796705   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.802715   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:34:37.814531   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819194   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819264   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.826904   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.838758   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.849465   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853251   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853305   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.860596   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:34:37.872602   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:34:37.885280   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889622   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889680   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.896943   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
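The openssl x509 -hash calls above compute the subject-name hash OpenSSL uses for file names under /etc/ssl/certs, which is where the 3ec20f2e.0, b5213941.0 and 51391683.0 symlink names come from. The same mapping by hand, using the files copied earlier:
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # b5213941
	openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem       # 3ec20f2e
	openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem        # 51391683
	# /etc/ssl/certs/<hash>.0 -> PEM is what the system trust store actually resolves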
	I0919 22:34:37.908337   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.912368   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:34:37.912422   67622 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:34:37.912521   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
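The unit fragment above becomes the kubeadm drop-in for this node: ExecStart is cleared and re-set with the node-specific --hostname-override and --node-ip, while cluster-wide settings stay in /var/lib/kubelet/config.yaml. The files themselves are written a few lines below (scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf and /lib/systemd/system/kubelet.service); on the node the merged view can be inspected with:
	systemctl cat kubelet      # base unit plus the 10-kubeadm.conf drop-in generated here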
	I0919 22:34:37.912549   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:37.912589   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:37.927225   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.927295   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
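The manifest above is the static pod kube-vip runs on each control-plane node: the instances elect a leader through the plndr-cp-lock lease in kube-system, and the leader answers ARP for the HA VIP 192.168.49.254 on eth0 (since the ip_vs modules were not found, control-plane load-balancing stays off and only VIP failover is used). A rough way to see who currently holds the VIP, assuming the profile and node names from this run:
	kubectl --context ha-984158 -n kube-system get lease plndr-cp-lock \
	  -o jsonpath='{.spec.holderIdentity}{"\n"}'
	minikube -p ha-984158 ssh -n ha-984158-m03 -- ip -4 addr show eth0   # the VIP appears on the current leader only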
	I0919 22:34:37.927349   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:37.937175   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:37.937241   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:34:37.946525   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:34:37.966151   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:37.991832   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:38.014409   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:38.018813   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:38.034487   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:38.100010   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:38.123308   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:38.123594   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:38.123717   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:34:38.123769   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:38.144625   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:38.293340   67622 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:38.293387   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:34:51.872651   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (13.579238089s)
	I0919 22:34:51.872690   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:52.127072   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m03 minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:52.206869   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:52.293044   67622 start.go:319] duration metric: took 14.169442875s to joinCluster
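At this point the third control-plane node has been joined with kubeadm, labeled, and had its control-plane NoSchedule taint removed (the trailing '-' on the taint command above). A sketch of verifying the resulting topology; the etcdctl cert paths assume minikube's /var/lib/minikube/certs/etcd layout shown earlier in this log:
	kubectl --context ha-984158 get nodes -o wide
	kubectl --context ha-984158 -n kube-system exec etcd-ha-984158 -- etcdctl member list \
	  --endpoints=https://127.0.0.1:2379 \
	  --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	  --cert=/var/lib/minikube/certs/etcd/server.crt \
	  --key=/var/lib/minikube/certs/etcd/server.key
	# expect three started members, one per ha-984158 control plane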
	I0919 22:34:52.293202   67622 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:52.293464   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:52.295014   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:52.296471   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:52.405642   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:52.419776   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:52.419840   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:52.420054   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	W0919 22:34:54.424074   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:56.924240   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:58.925198   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:01.425329   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:03.923474   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	I0919 22:35:05.424225   67622 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:35:05.424253   67622 node_ready.go:38] duration metric: took 13.004161929s for node "ha-984158-m03" to be "Ready" ...
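The wait above polls the node object until its Ready condition turns True (about 13s here, while kindnet and kube-proxy come up on the new node). The equivalent one-off check:
	kubectl --context ha-984158 get node ha-984158-m03 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}{"\n"}'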
	I0919 22:35:05.424266   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:05.424326   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:05.438342   67622 api_server.go:72] duration metric: took 13.14509411s to wait for apiserver process to appear ...
	I0919 22:35:05.438367   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:05.438390   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:05.442575   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:05.443547   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:05.443573   67622 api_server.go:131] duration metric: took 5.19876ms to wait for apiserver health ...
	I0919 22:35:05.443582   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:05.452030   67622 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:05.452062   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.452067   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.452073   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.452079   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.452084   67622 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.452089   67622 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.452094   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.452129   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.452136   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.452141   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.452146   67622 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.452151   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.452156   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.452161   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.452165   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.452170   67622 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.452174   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.452179   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.452184   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.452188   67622 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.452193   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.452199   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.452205   67622 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.452208   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.452217   67622 system_pods.go:74] duration metric: took 8.62798ms to wait for pod list to return data ...
	I0919 22:35:05.452227   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:05.455571   67622 default_sa.go:45] found service account: "default"
	I0919 22:35:05.455594   67622 default_sa.go:55] duration metric: took 3.361804ms for default service account to be created ...
	I0919 22:35:05.455603   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:05.460748   67622 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:05.460777   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.460783   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.460787   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.460790   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.460793   67622 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.460798   67622 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.460801   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.460803   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.460806   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.460809   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.460812   67622 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.460815   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.460818   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.460821   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.460826   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.460829   67622 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.460832   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.460835   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.460838   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.460841   67622 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.460844   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.460847   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.460850   67622 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.460853   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.460859   67622 system_pods.go:126] duration metric: took 5.251911ms to wait for k8s-apps to be running ...
	I0919 22:35:05.460866   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:05.460906   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:05.475728   67622 system_svc.go:56] duration metric: took 14.850569ms WaitForService to wait for kubelet
	I0919 22:35:05.475767   67622 kubeadm.go:578] duration metric: took 13.182524274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:05.475791   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:05.479992   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480016   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480028   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480032   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480035   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480038   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480042   67622 node_conditions.go:105] duration metric: took 4.246099ms to run NodePressure ...
	I0919 22:35:05.480052   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:35:05.480076   67622 start.go:255] writing updated cluster config ...
	I0919 22:35:05.480391   67622 ssh_runner.go:195] Run: rm -f paused
	I0919 22:35:05.484443   67622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:05.484864   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:35:05.488632   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494158   67622 pod_ready.go:94] pod "coredns-66bc5c9577-5gnbx" is "Ready"
	I0919 22:35:05.494184   67622 pod_ready.go:86] duration metric: took 5.519921ms for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494194   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.498979   67622 pod_ready.go:94] pod "coredns-66bc5c9577-ltjmz" is "Ready"
	I0919 22:35:05.499001   67622 pod_ready.go:86] duration metric: took 4.801852ms for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.501488   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506605   67622 pod_ready.go:94] pod "etcd-ha-984158" is "Ready"
	I0919 22:35:05.506631   67622 pod_ready.go:86] duration metric: took 5.121241ms for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506643   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511687   67622 pod_ready.go:94] pod "etcd-ha-984158-m02" is "Ready"
	I0919 22:35:05.511711   67622 pod_ready.go:86] duration metric: took 5.063338ms for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511721   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.686203   67622 request.go:683] "Waited before sending request" delay="174.390617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-984158-m03"
	I0919 22:35:05.886318   67622 request.go:683] "Waited before sending request" delay="196.323175ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:05.889520   67622 pod_ready.go:94] pod "etcd-ha-984158-m03" is "Ready"
	I0919 22:35:05.889544   67622 pod_ready.go:86] duration metric: took 377.817661ms for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.086145   67622 request.go:683] "Waited before sending request" delay="196.407438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:35:06.090017   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.285426   67622 request.go:683] "Waited before sending request" delay="195.307128ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158"
	I0919 22:35:06.486234   67622 request.go:683] "Waited before sending request" delay="197.363102ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:06.489211   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158" is "Ready"
	I0919 22:35:06.489239   67622 pod_ready.go:86] duration metric: took 399.19471ms for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.489249   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.685697   67622 request.go:683] "Waited before sending request" delay="196.373047ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m02"
	I0919 22:35:06.885918   67622 request.go:683] "Waited before sending request" delay="197.214097ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:06.888940   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m02" is "Ready"
	I0919 22:35:06.888966   67622 pod_ready.go:86] duration metric: took 399.709223ms for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.888977   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.086320   67622 request.go:683] "Waited before sending request" delay="197.234187ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m03"
	I0919 22:35:07.286155   67622 request.go:683] "Waited before sending request" delay="196.391562ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:07.289116   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m03" is "Ready"
	I0919 22:35:07.289145   67622 pod_ready.go:86] duration metric: took 400.160627ms for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.485647   67622 request.go:683] "Waited before sending request" delay="196.369215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:35:07.489356   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.685801   67622 request.go:683] "Waited before sending request" delay="196.331241ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158"
	I0919 22:35:07.886175   67622 request.go:683] "Waited before sending request" delay="197.36953ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:07.889268   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158" is "Ready"
	I0919 22:35:07.889292   67622 pod_ready.go:86] duration metric: took 399.911799ms for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.889300   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.085780   67622 request.go:683] "Waited before sending request" delay="196.397628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m02"
	I0919 22:35:08.286293   67622 request.go:683] "Waited before sending request" delay="197.157746ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:08.289542   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m02" is "Ready"
	I0919 22:35:08.289565   67622 pod_ready.go:86] duration metric: took 400.260559ms for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.289585   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.486054   67622 request.go:683] "Waited before sending request" delay="196.383406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m03"
	I0919 22:35:08.685765   67622 request.go:683] "Waited before sending request" delay="196.365381ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:08.688911   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m03" is "Ready"
	I0919 22:35:08.688939   67622 pod_ready.go:86] duration metric: took 399.348524ms for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.885240   67622 request.go:683] "Waited before sending request" delay="196.197284ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:35:08.888653   67622 pod_ready.go:83] waiting for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.086194   67622 request.go:683] "Waited before sending request" delay="197.430633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hdxxn"
	I0919 22:35:09.285936   67622 request.go:683] "Waited before sending request" delay="196.399441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:09.289309   67622 pod_ready.go:94] pod "kube-proxy-hdxxn" is "Ready"
	I0919 22:35:09.289344   67622 pod_ready.go:86] duration metric: took 400.666867ms for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.289356   67622 pod_ready.go:83] waiting for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.485857   67622 request.go:683] "Waited before sending request" delay="196.368869ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k2drm"
	I0919 22:35:09.685224   67622 request.go:683] "Waited before sending request" delay="196.312304ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:09.688202   67622 pod_ready.go:94] pod "kube-proxy-k2drm" is "Ready"
	I0919 22:35:09.688225   67622 pod_ready.go:86] duration metric: took 398.86315ms for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.688232   67622 pod_ready.go:83] waiting for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.885674   67622 request.go:683] "Waited before sending request" delay="197.37394ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-plrn2"
	I0919 22:35:10.085404   67622 request.go:683] "Waited before sending request" delay="196.238234ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:10.088413   67622 pod_ready.go:94] pod "kube-proxy-plrn2" is "Ready"
	I0919 22:35:10.088435   67622 pod_ready.go:86] duration metric: took 400.198021ms for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.285955   67622 request.go:683] "Waited before sending request" delay="197.399738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0919 22:35:10.289773   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.486274   67622 request.go:683] "Waited before sending request" delay="196.397415ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158"
	I0919 22:35:10.685865   67622 request.go:683] "Waited before sending request" delay="196.354476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:10.688789   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158" is "Ready"
	I0919 22:35:10.688812   67622 pod_ready.go:86] duration metric: took 399.015441ms for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.688821   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.886266   67622 request.go:683] "Waited before sending request" delay="197.365068ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m02"
	I0919 22:35:11.085685   67622 request.go:683] "Waited before sending request" delay="196.401015ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:11.088847   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m02" is "Ready"
	I0919 22:35:11.088884   67622 pod_ready.go:86] duration metric: took 400.056175ms for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.088895   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.285309   67622 request.go:683] "Waited before sending request" delay="196.306548ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m03"
	I0919 22:35:11.485951   67622 request.go:683] "Waited before sending request" delay="197.396443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:11.489000   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m03" is "Ready"
	I0919 22:35:11.489026   67622 pod_ready.go:86] duration metric: took 400.124566ms for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.489036   67622 pod_ready.go:40] duration metric: took 6.004562578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:11.533521   67622 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:35:11.535265   67622 out.go:179] * Done! kubectl is now configured to use "ha-984158" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.550284463Z" level=info msg="Starting container: ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a" id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.559054866Z" level=info msg="Started container" PID=2323 containerID=ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a description=kube-system/coredns-66bc5c9577-5gnbx/coredns id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67546437e6cd1431d56127b35c686ec4fbef541821d81e817187eac2eac44ae
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844458340Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844519430Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863307191Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863361143Z" level=info msg="Adding pod default_busybox-7b57f96db7-rnjl7 to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877409166Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877580199Z" level=info msg="Checking pod default_busybox-7b57f96db7-rnjl7 for CNI network kindnet (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.878483692Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.879359170Z" level=info msg="Ran pod sandbox 310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 with infra container: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880607012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880856313Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.881636849Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.882840066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:13 ha-984158 crio[940]: time="2025-09-19 22:35:13.826935593Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.299818076Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.300497300Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301041675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301798545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.302421301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305168065Z" level=info msg="Creating container: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305267569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.380968697Z" level=info msg="Created container 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.381641384Z" level=info msg="Starting container: 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e" id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.388597470Z" level=info msg="Started container" PID=2560 containerID=9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e description=default/busybox-7b57f96db7-rnjl7/busybox id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9169b9b095a98       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   33 seconds ago       Running             busybox                   0                   310dd81aa6739       busybox-7b57f96db7-rnjl7
	ea03ecb87a050       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   0                   a67546437e6cd       coredns-66bc5c9577-5gnbx
	d9aec8cde801c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       0                   f2f4dad3060cd       storage-provisioner
	7df7251c31862       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      About a minute ago   Running             coredns                   0                   549805b340720       coredns-66bc5c9577-ltjmz
	66e8ff6b4b2da       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago        Running             kindnet-cni               0                   ca0bb4eb3a856       kindnet-rd882
	c90c0cf2d2e8d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago        Running             kube-proxy                0                   6de94aa7ba9e1       kube-proxy-hdxxn
	6b6a81f4f6b23       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   fba7b712cd4d4       kube-vip-ha-984158
	ccf53f9534beb       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago        Running             kube-controller-manager   0                   15b128d3c6aed       kube-controller-manager-ha-984158
	01cd32d6daeeb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago        Running             kube-scheduler            0                   d854ebb188beb       kube-scheduler-ha-984158
	fda65fdd5e2b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago        Running             etcd                      0                   9e61b75f9a67d       etcd-ha-984158
	8ed4a5888320b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago        Running             kube-apiserver            0                   f7a2c4489feba       kube-apiserver-ha-984158
	
	
	==> coredns [7df7251c318624785e44160ab98a256321ca02663ac3f38b31058625169e65cf] <==
	[INFO] 10.244.1.2:34043 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006963816s
	[INFO] 10.244.1.2:38425 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137951s
	[INFO] 10.244.2.2:51391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001353s
	[INFO] 10.244.2.2:50788 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010898214s
	[INFO] 10.244.2.2:57984 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165964s
	[INFO] 10.244.2.2:46802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00010628s
	[INFO] 10.244.2.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133945s
	[INFO] 10.244.0.4:44778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139187s
	[INFO] 10.244.0.4:52371 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.0.4:44391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012178s
	[INFO] 10.244.0.4:42322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090724s
	[INFO] 10.244.1.2:47486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152861s
	[INFO] 10.244.1.2:33837 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197948s
	[INFO] 10.244.2.2:57569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187028s
	[INFO] 10.244.2.2:49299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201838s
	[INFO] 10.244.2.2:56021 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115909s
	[INFO] 10.244.0.4:58940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136946s
	[INFO] 10.244.0.4:36648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142402s
	[INFO] 10.244.1.2:54958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137478s
	[INFO] 10.244.1.2:49367 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111679s
	[INFO] 10.244.2.2:37477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176669s
	[INFO] 10.244.2.2:37006 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082361s
	[INFO] 10.244.0.4:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131909s
	[INFO] 10.244.0.4:59935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069811s
	[INFO] 10.244.0.4:50031 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124505s
	
	
	==> coredns [ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a] <==
	[INFO] 10.244.2.2:33714 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159773s
	[INFO] 10.244.2.2:40292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00009881s
	[INFO] 10.244.2.2:39630 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000811472s
	[INFO] 10.244.0.4:43002 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000112134s
	[INFO] 10.244.0.4:40782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000094347s
	[INFO] 10.244.1.2:36510 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033427373s
	[INFO] 10.244.1.2:41816 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158466s
	[INFO] 10.244.1.2:43260 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193529s
	[INFO] 10.244.2.2:48795 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161887s
	[INFO] 10.244.2.2:46683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133363s
	[INFO] 10.244.2.2:56162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135264s
	[INFO] 10.244.0.4:60293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085933s
	[INFO] 10.244.0.4:50296 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010728706s
	[INFO] 10.244.0.4:42098 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170789s
	[INFO] 10.244.0.4:50435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154329s
	[INFO] 10.244.1.2:49298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184582s
	[INFO] 10.244.1.2:58606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110603s
	[INFO] 10.244.2.2:33122 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186581s
	[INFO] 10.244.0.4:51847 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155018s
	[INFO] 10.244.0.4:49360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091433s
	[INFO] 10.244.1.2:44523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150525s
	[INFO] 10.244.1.2:48087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154066s
	[INFO] 10.244.2.2:47219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124336s
	[INFO] 10.244.2.2:58889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148273s
	[INFO] 10.244.0.4:47101 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088754s
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 39160f7d8b9f44c18aede41e4d267fbd
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m2s
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m2s
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m5s
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m3s
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m1s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m                     kube-proxy       
	  Normal  NodeHasSufficientPID     2m13s (x8 over 2m13s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m13s (x8 over 2m13s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m13s (x8 over 2m13s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m5s                   kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m5s                   kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m5s                   kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m4s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                110s                   kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           96s                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           55s                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 d32b005f3b5146359774fcbe4364b90b
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         93s
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      95s
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        91s   kube-proxy       
	  Normal  RegisteredNode  94s   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  91s   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	Name:               ha-984158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-984158-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 038f6eff3d614d78917c49afbf40a4e7
	  System UUID:                a60f86ef-6d86-4217-85ca-ad02416ddc34
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c7qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 etcd-ha-984158-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         56s
	  kube-system                 kindnet-269nt                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      58s
	  kube-system                 kube-apiserver-ha-984158-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-controller-manager-ha-984158-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-proxy-k2drm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kube-system                 kube-scheduler-ha-984158-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         56s
	  kube-system                 kube-vip-ha-984158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        54s   kube-proxy       
	  Normal  RegisteredNode  56s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  55s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  53s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [fda65fdd5e2b890fe6940cd0f6b5afae54775a44a1e30b23dc514a1ea4a5dd4c] <==
	{"level":"info","ts":"2025-09-19T22:34:42.874829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.880780Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:34:42.880910Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.880949Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.904957Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.908392Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:43.233880Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7185048267463743064 12593026477526642892 16737998778312655447)"}
	{"level":"info","ts":"2025-09-19T22:34:43.234252Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:43.234386Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:51.604205Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:02.111263Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:12.622830Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:12.851680Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e8495135083f8257","bytes":1479617,"size":"1.5 MB","took":"30.017342016s"}
	{"level":"info","ts":"2025-09-19T22:35:40.335511Z","caller":"traceutil/trace.go:172","msg":"trace[580727823] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"128.447767ms","start":"2025-09-19T22:35:40.207051Z","end":"2025-09-19T22:35:40.335498Z","steps":["trace[580727823] 'process raft request'  (duration: 128.303588ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:40.335758Z","caller":"traceutil/trace.go:172","msg":"trace[1969207353] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1195; }","duration":"117.354033ms","start":"2025-09-19T22:35:40.218388Z","end":"2025-09-19T22:35:40.335742Z","steps":["trace[1969207353] 'read index received'  (duration: 117.348211ms)","trace[1969207353] 'applied index is now lower than readState.Index'  (duration: 4.715µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:40.335880Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.473932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:40.335910Z","caller":"traceutil/trace.go:172","msg":"trace[12563226] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:1018; }","duration":"117.51944ms","start":"2025-09-19T22:35:40.218383Z","end":"2025-09-19T22:35:40.335902Z","steps":["trace[12563226] 'agreement among raft nodes before linearized reading'  (duration: 117.444854ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:41.265249Z","caller":"traceutil/trace.go:172","msg":"trace[1252869991] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1199; }","duration":"121.843359ms","start":"2025-09-19T22:35:41.143386Z","end":"2025-09-19T22:35:41.265229Z","steps":["trace[1252869991] 'read index received'  (duration: 121.835594ms)","trace[1252869991] 'applied index is now lower than readState.Index'  (duration: 6.337µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398137Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.71266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.398198Z","caller":"traceutil/trace.go:172","msg":"trace[1812653205] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1020; }","duration":"254.803848ms","start":"2025-09-19T22:35:41.143376Z","end":"2025-09-19T22:35:41.398180Z","steps":["trace[1812653205] 'agreement among raft nodes before linearized reading'  (duration: 121.941063ms)","trace[1812653205] 'range keys from in-memory index tree'  (duration: 132.739969ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398804Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.156113ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6221891540473536501 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:996 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:6221891540473536499 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658165Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e8495135083f8257","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.83656ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658213Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"63b66b54cc365658","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.890877ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.659958Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.463182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.660011Z","caller":"traceutil/trace.go:172","msg":"trace[1201229941] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"114.533322ms","start":"2025-09-19T22:35:41.545465Z","end":"2025-09-19T22:35:41.659998Z","steps":["trace[1201229941] 'range keys from in-memory index tree'  (duration: 114.424434ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:35:49 up  1:18,  0 users,  load average: 1.16, 0.61, 0.45
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [66e8ff6b4b2da8ea01c46a247aa4714a90f2ed1d2ba051443dc7790f7f9aa6d2] <==
	I0919 22:35:08.711495       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:18.711209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:18.711272       1 main.go:301] handling current node
	I0919 22:35:18.711291       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:18.711300       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:18.711536       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:18.711554       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:28.716289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:28.716329       1 main.go:301] handling current node
	I0919 22:35:28.716350       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:28.716364       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:28.716578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:28.716595       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:38.711253       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:38.711317       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:38.711571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:38.711585       1 main.go:301] handling current node
	I0919 22:35:38.711598       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:38.711602       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:48.710041       1 main.go:301] handling current node
	I0919 22:35:48.710057       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:48.710061       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710325       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:48.710351       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ed4a5888320b17174d5fd3227517f4c634bc157381bb9771474bfa5169aab2f] <==
	I0919 22:33:40.990846       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:33:44.087265       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:33:44.098000       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 22:33:44.107869       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:33:45.993421       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:33:46.743338       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:33:46.796068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:46.799874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:34:55.461764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:00.508368       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:35:16.679730       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50288: use of closed network connection
	E0919 22:35:16.855038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50310: use of closed network connection
	E0919 22:35:17.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50338: use of closed network connection
	E0919 22:35:17.243171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50346: use of closed network connection
	E0919 22:35:17.421526       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50372: use of closed network connection
	E0919 22:35:17.591329       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50402: use of closed network connection
	E0919 22:35:17.761924       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50422: use of closed network connection
	E0919 22:35:17.931932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50438: use of closed network connection
	E0919 22:35:18.091452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50456: use of closed network connection
	E0919 22:35:18.368592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50480: use of closed network connection
	E0919 22:35:18.524781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50484: use of closed network connection
	E0919 22:35:18.691736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50510: use of closed network connection
	E0919 22:35:18.869219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50534: use of closed network connection
	E0919 22:35:19.030842       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50552: use of closed network connection
	E0919 22:35:19.201169       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50566: use of closed network connection
	
	
	==> kube-controller-manager [ccf53f9534beb8a8c8742cb5e71e0540bfd9bc439877b525756c21d5eef9b422] <==
	I0919 22:33:45.991296       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:33:45.991359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:33:45.991661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:33:45.992619       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:33:45.992661       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:33:45.992715       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:33:45.992824       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:33:45.992860       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:33:45.992945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:33:45.992988       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0919 22:33:45.994081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:33:45.994164       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:33:45.997463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:33:46.000645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:33:46.007588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:33:46.014824       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:33:46.019019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:34:00.995932       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0919 22:34:13.994601       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-f5gnl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-f5gnl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:34:14.552916       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m02\" does not exist"
	I0919 22:34:14.582362       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:34:15.998546       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:34:51.526332       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m03\" does not exist"
	I0919 22:34:51.541723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:34:56.108424       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [c90c0cf2d2e8d28017db69b5b6570bb146918d86f62813e08b6cf30633aabf39] <==
	I0919 22:33:48.275684       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:33:48.343595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:33:48.444904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:33:48.444958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:33:48.445144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:33:48.471588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:33:48.471666       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:33:48.477726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:33:48.478178       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:33:48.478219       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:33:48.480033       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:33:48.480053       1 config.go:200] "Starting service config controller"
	I0919 22:33:48.480068       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:33:48.480085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:33:48.482031       1 config.go:309] "Starting node config controller"
	I0919 22:33:48.482049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:33:48.482057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:33:48.480508       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:33:48.482857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:33:48.580234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:33:48.582666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:33:48.583733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [01cd32d6daeeb8f86625ec5d90712811aa7cc0b7dee503e21a57e8bd093892cc] <==
	E0919 22:33:39.908093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:33:39.911081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:33:39.988409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:33:40.028297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:33:40.063508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:33:40.098835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:33:40.219678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:33:40.224737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:33:40.235874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:33:40.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0919 22:33:42.406311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:34:14.584511       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-plrn2" node="ha-984158-m02"
	E0919 22:34:14.584664       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-plrn2"
	E0919 22:34:51.565644       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.565863       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 040bf3f7-8d97-4799-b3a2-12b57eec38ef(kube-system/kube-proxy-k2drm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565922       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565851       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	E0919 22:34:51.565999       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6db503ca-eaf1-4ffc-8418-f778e65529c9(kube-system/kube-proxy-tqv25) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	E0919 22:34:51.565619       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	E0919 22:34:51.566066       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2040513e-991f-4c82-9b69-1e3fa299841a(kube-system/kindnet-gtv88) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	E0919 22:34:51.568208       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	I0919 22:34:51.568393       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	I0919 22:34:51.568363       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.568334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	I0919 22:34:51.574210       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	
	
	==> kubelet <==
	Sep 19 22:33:59 ha-984158 kubelet[1691]: I0919 22:33:59.998182    1691 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5gnbx" podStartSLOduration=12.998157511 podStartE2EDuration="12.998157511s" podCreationTimestamp="2025-09-19 22:33:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-19 22:33:59.997467911 +0000 UTC m=+16.173563701" watchObservedRunningTime="2025-09-19 22:33:59.998157511 +0000 UTC m=+16.174253301"
	Sep 19 22:34:03 ha-984158 kubelet[1691]: E0919 22:34:03.923273    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321243922999524  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:03 ha-984158 kubelet[1691]: E0919 22:34:03.923330    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321243922999524  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:13 ha-984158 kubelet[1691]: E0919 22:34:13.925320    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321253925085483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:13 ha-984158 kubelet[1691]: E0919 22:34:13.925352    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321253925085483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:23 ha-984158 kubelet[1691]: E0919 22:34:23.926790    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321263926568823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:23 ha-984158 kubelet[1691]: E0919 22:34:23.926836    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321263926568823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928784    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928816    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.930936    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.931007    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932414    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932450    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934355    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934407    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:12 ha-984158 kubelet[1691]: I0919 22:35:12.604999    1691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984pg\" (UniqueName: \"kubernetes.io/projected/68cd1643-e7c7-480f-af91-8f2f4eafb766-kube-api-access-984pg\") pod \"busybox-7b57f96db7-rnjl7\" (UID: \"68cd1643-e7c7-480f-af91-8f2f4eafb766\") " pod="default/busybox-7b57f96db7-rnjl7"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935689    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935726    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:19 ha-984158 kubelet[1691]: E0919 22:35:19.030824    1691 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40998->127.0.0.1:37933: write tcp 127.0.0.1:40998->127.0.0.1:37933: write: broken pipe
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937510    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937554    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938855    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938899    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940553    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940595    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (30.83s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (15.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --output json --alsologtostderr -v 5: exit status 7 (732.71985ms)

                                                
                                                
-- stdout --
	[{"Name":"ha-984158","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-984158-m02","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-984158-m03","Host":"Running","Kubelet":"Running","APIServer":"Running","Kubeconfig":"Configured","Worker":false},{"Name":"ha-984158-m04","Host":"Stopped","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Stopped","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:35:50.903072   80909 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:35:50.903191   80909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:50.903200   80909 out.go:374] Setting ErrFile to fd 2...
	I0919 22:35:50.903204   80909 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:35:50.903401   80909 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:35:50.903604   80909 out.go:368] Setting JSON to true
	I0919 22:35:50.903627   80909 mustload.go:65] Loading cluster: ha-984158
	I0919 22:35:50.903766   80909 notify.go:220] Checking for updates...
	I0919 22:35:50.904079   80909 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:35:50.904115   80909 status.go:174] checking status of ha-984158 ...
	I0919 22:35:50.904652   80909 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:35:50.926210   80909 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:35:50.926248   80909 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:35:50.926549   80909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:35:50.945375   80909 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:35:50.945635   80909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:35:50.945688   80909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:35:50.963532   80909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:35:51.058683   80909 ssh_runner.go:195] Run: systemctl --version
	I0919 22:35:51.063251   80909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:51.075397   80909 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:35:51.136000   80909 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:35:51.125871086 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:35:51.136801   80909 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:35:51.136837   80909 api_server.go:166] Checking apiserver status ...
	I0919 22:35:51.136884   80909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.150410   80909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:35:51.161416   80909 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:51.161469   80909 ssh_runner.go:195] Run: ls
	I0919 22:35:51.165216   80909 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:35:51.170952   80909 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:35:51.170979   80909 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:35:51.170988   80909 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:35:51.171013   80909 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:35:51.171313   80909 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:35:51.190667   80909 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:35:51.190693   80909 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:35:51.190938   80909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:35:51.209737   80909 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:35:51.210016   80909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:35:51.210060   80909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:35:51.228466   80909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:35:51.323866   80909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:51.339844   80909 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:35:51.339881   80909 api_server.go:166] Checking apiserver status ...
	I0919 22:35:51.339920   80909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.351616   80909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup
	W0919 22:35:51.361546   80909 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1376/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:51.361590   80909 ssh_runner.go:195] Run: ls
	I0919 22:35:51.365226   80909 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:35:51.370136   80909 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:35:51.370164   80909 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:35:51.370174   80909 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:35:51.370193   80909 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:35:51.370434   80909 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:35:51.390726   80909 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:35:51.390753   80909 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:35:51.391039   80909 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:35:51.408822   80909 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:35:51.409082   80909 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:35:51.409145   80909 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:35:51.428251   80909 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:35:51.524677   80909 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:51.536879   80909 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:35:51.536905   80909 api_server.go:166] Checking apiserver status ...
	I0919 22:35:51.536933   80909 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:51.548722   80909 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:35:51.559175   80909 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:35:51.559228   80909 ssh_runner.go:195] Run: ls
	I0919 22:35:51.562966   80909 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:35:51.567187   80909 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:35:51.567211   80909 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:35:51.567219   80909 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:35:51.567237   80909 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:35:51.567486   80909 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:35:51.586592   80909 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:35:51.586634   80909 status.go:384] host is not running, skipping remaining checks
	I0919 22:35:51.586641   80909 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158_ha-984158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test_ha-984158_ha-984158-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158_ha-984158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test_ha-984158_ha-984158-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158_ha-984158-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158_ha-984158-m04.txt: exit status 1 (145.861958ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158_ha-984158-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158_ha-984158-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158_ha-984158-m04.txt": exit status 1 (143.108628ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test_ha-984158_ha-984158-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m02_ha-984158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test_ha-984158-m02_ha-984158.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m02_ha-984158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test_ha-984158-m02_ha-984158-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt: exit status 1 (151.498445ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m02:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt": exit status 1 (140.141085ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test_ha-984158-m02_ha-984158-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m03_ha-984158.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt: exit status 1 (156.941751ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt": exit status 1 (152.273606ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt: exit status 1 (143.601376ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (144.348799ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"Test file for checking file cp process",
+ 	"",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt: exit status 1 (146.204971ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (153.44896ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:545: failed to read test file 'testdata/cp-test.txt' : open /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt: no such file or directory
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt: exit status 1 (158.382306ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (141.350575ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt": exit status 1 (256.841583ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-984158-m04_ha-984158.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158 \"sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-984158-m04_ha-984158.txt: No such file or directory\r\n",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt: exit status 1 (176.008372ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (141.926207ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt": exit status 1 (273.081884ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m02 \"sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt: No such file or directory\r\n",
  )
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt
helpers_test.go:573: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt: exit status 1 (164.730958ms)

                                                
                                                
** stderr ** 
	getting host: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:578: failed to run an cp command. args "out/minikube-linux-amd64 -p ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 "sudo cat /home/docker/cp-test.txt": exit status 1 (150.661222ms)

                                                
                                                
** stderr ** 
	ssh: "ha-984158-m04" is not running

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m04 \"sudo cat /home/docker/cp-test.txt\"" : exit status 1
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt"
helpers_test.go:551: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 "sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt": exit status 1 (263.217779ms)

                                                
                                                
-- stdout --
	cat: /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
helpers_test.go:556: failed to run a cp command. args "out/minikube-linux-amd64 -p ha-984158 ssh -n ha-984158-m03 \"sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt\"" : exit status 1
helpers_test.go:590: /testdata/cp-test.txt content mismatch (-want +got):
  string(
- 	"",
+ 	"cat: /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt: No such file or directory\r\n",
  )
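
Every failure in this block traces back to the same root cause reported at the top of each attempt: the ha-984158-m04 container is not running, so the cp itself and every follow-up ssh/cat probe can only error out. A hypothetical pre-check (not part of the test suite; the container name is taken from this report) that asks Docker for the node's state before attempting a copy could look like:

// node_check.go - sketch of verifying that a minikube node container is running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// nodeRunning returns true when `docker inspect` reports State.Running=true.
func nodeRunning(container string) (bool, error) {
	out, err := exec.Command("docker", "inspect", "-f", "{{.State.Running}}", container).Output()
	if err != nil {
		return false, fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)) == "true", nil
}

func main() {
	running, err := nodeRunning("ha-984158-m04")
	if err != nil {
		fmt.Println("could not inspect container:", err)
		return
	}
	fmt.Println("ha-984158-m04 running:", running)
}
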
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:33:25.030742493Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b35e3615d35b58bcec7825bb039821b1dfb6293e56fe1316d0ae491d5b3b0558",
	            "SandboxKey": "/var/run/docker/netns/b35e3615d35b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:4d:99:af:3d:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "150c15de67a84040f10d82e99ed82c2230b34908474820017c5633e8a5513d79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
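
The inspect output above shows the primary control-plane container itself healthy: running, 3 GiB memory limit, attached to the ha-984158 network at 192.168.49.2, with 22/tcp published on 127.0.0.1:32783. The minikube logs below resolve that SSH port with a Docker format template; the following sketch performs the same lookup (container name and template copied from this report):

// ssh_port.go - looks up the host port mapped to the node's 22/tcp, mirroring
// the `docker container inspect -f ...` calls that appear in the logs below.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "ha-984158").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 32783 in this run
}
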
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/CopyFile FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/CopyFile]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.205254954s)
helpers_test.go:260: TestMultiControlPlane/serial/CopyFile logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m03.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m03_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:33:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:33:19.901060   67622 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:19.901185   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901193   67622 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:19.901198   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901448   67622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:33:19.902017   67622 out.go:368] Setting JSON to false
	I0919 22:33:19.903166   67622 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4550,"bootTime":1758316650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:33:19.903283   67622 start.go:140] virtualization: kvm guest
	I0919 22:33:19.906578   67622 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:33:19.908489   67622 notify.go:220] Checking for updates...
	I0919 22:33:19.908508   67622 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:33:19.910361   67622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:33:19.912958   67622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:19.914823   67622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:33:19.919772   67622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:33:19.921444   67622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:33:19.923242   67622 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:33:19.947549   67622 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:33:19.947649   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.004707   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:19.994191177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.004832   67622 docker.go:318] overlay module found
	I0919 22:33:20.006907   67622 out.go:179] * Using the docker driver based on user configuration
	I0919 22:33:20.008195   67622 start.go:304] selected driver: docker
	I0919 22:33:20.008214   67622 start.go:918] validating driver "docker" against <nil>
	I0919 22:33:20.008227   67622 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:33:20.008818   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.067697   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:20.055128215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.067871   67622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:33:20.068167   67622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:33:20.070129   67622 out.go:179] * Using Docker driver with root privileges
	I0919 22:33:20.071439   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:20.071513   67622 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:33:20.071523   67622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:33:20.071600   67622 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:c
ni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:20.073188   67622 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:33:20.074628   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:20.076439   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:20.078066   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.078159   67622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:33:20.078159   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:20.078174   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:20.078333   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:20.078348   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:20.078744   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:20.078777   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json: {Name:mk745b6092cc48756321ca371e559184d12db2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:20.100036   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:20.100059   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:20.100081   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:20.100133   67622 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:20.100248   67622 start.go:364] duration metric: took 93.303µs to acquireMachinesLock for "ha-984158"
	I0919 22:33:20.100277   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:20.100380   67622 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:33:20.103382   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:20.103623   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:20.103675   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:20.103751   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:20.103785   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.103860   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:20.103880   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103895   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.104259   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:33:20.122340   67622 cli_runner.go:211] docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:33:20.122418   67622 network_create.go:284] running [docker network inspect ha-984158] to gather additional debugging logs...
	I0919 22:33:20.122455   67622 cli_runner.go:164] Run: docker network inspect ha-984158
	W0919 22:33:20.139578   67622 cli_runner.go:211] docker network inspect ha-984158 returned with exit code 1
	I0919 22:33:20.139605   67622 network_create.go:287] error running [docker network inspect ha-984158]: docker network inspect ha-984158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158 not found
	I0919 22:33:20.139623   67622 network_create.go:289] output of [docker network inspect ha-984158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158 not found
	
	** /stderr **
	I0919 22:33:20.139738   67622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:20.159001   67622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b807f0}
	I0919 22:33:20.159067   67622 network_create.go:124] attempt to create docker network ha-984158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:33:20.159151   67622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-984158 ha-984158
	I0919 22:33:20.220465   67622 network_create.go:108] docker network ha-984158 192.168.49.0/24 created
	I0919 22:33:20.220505   67622 kic.go:121] calculated static IP "192.168.49.2" for the "ha-984158" container
	I0919 22:33:20.220576   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:20.238299   67622 cli_runner.go:164] Run: docker volume create ha-984158 --label name.minikube.sigs.k8s.io=ha-984158 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:20.257860   67622 oci.go:103] Successfully created a docker volume ha-984158
	I0919 22:33:20.258049   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --entrypoint /usr/bin/test -v ha-984158:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:20.650160   67622 oci.go:107] Successfully prepared a docker volume ha-984158
	I0919 22:33:20.650207   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.650234   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:20.650319   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:24.923696   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273335756s)
	I0919 22:33:24.923745   67622 kic.go:203] duration metric: took 4.273508289s to extract preloaded images to volume ...
	W0919 22:33:24.923837   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:24.923868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:24.923905   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:24.980440   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158 --name ha-984158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158 --network ha-984158 --ip 192.168.49.2 --volume ha-984158:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:25.243904   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Running}}
	I0919 22:33:25.262964   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:25.282632   67622 cli_runner.go:164] Run: docker exec ha-984158 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:25.335702   67622 oci.go:144] the created container "ha-984158" has a running status.
	I0919 22:33:25.335743   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa...
	I0919 22:33:26.151425   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:26.151477   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:26.176792   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.194873   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:26.194911   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:26.242371   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.260832   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:26.260926   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.280776   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.281060   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.281074   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:26.419419   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.419451   67622 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:33:26.419523   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.438011   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.438316   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.438334   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:33:26.587806   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.587878   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.606861   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.607093   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.607134   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:26.743969   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:26.744008   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:26.744055   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:26.744068   67622 provision.go:84] configureAuth start
	I0919 22:33:26.744152   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:26.765302   67622 provision.go:143] copyHostCerts
	I0919 22:33:26.765368   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765405   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:26.765414   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765489   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:26.765575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765596   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:26.765600   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765626   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:26.765682   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765696   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:26.765702   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765725   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:26.765773   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:33:27.052522   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:27.052586   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:27.052619   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.077750   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.179645   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:27.179718   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:27.210288   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:27.210351   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:33:27.238586   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:27.238648   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:33:27.264405   67622 provision.go:87] duration metric: took 520.31998ms to configureAuth
	I0919 22:33:27.264432   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:27.264630   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:27.264744   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.284923   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:27.285168   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:27.285188   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:27.533206   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:27.533232   67622 machine.go:96] duration metric: took 1.272377771s to provisionDockerMachine
	I0919 22:33:27.533245   67622 client.go:171] duration metric: took 7.429561262s to LocalClient.Create
	I0919 22:33:27.533269   67622 start.go:167] duration metric: took 7.429646395s to libmachine.API.Create "ha-984158"
	I0919 22:33:27.533281   67622 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:33:27.533292   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:27.533378   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:27.533430   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.551574   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.651298   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:27.655006   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:27.655037   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:27.655045   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:27.655051   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:27.655070   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:27.655147   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:27.655229   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:27.655238   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:27.655339   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:27.664695   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:27.695230   67622 start.go:296] duration metric: took 161.927495ms for postStartSetup
	I0919 22:33:27.695585   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.713847   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:27.714141   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:27.714182   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.735921   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.829368   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:27.833923   67622 start.go:128] duration metric: took 7.733528511s to createHost
	I0919 22:33:27.833953   67622 start.go:83] releasing machines lock for "ha-984158", held for 7.733693746s
	I0919 22:33:27.834022   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.851363   67622 ssh_runner.go:195] Run: cat /version.json
	I0919 22:33:27.851382   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:27.851422   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.851435   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.870773   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.871172   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:28.037834   67622 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:28.042707   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:28.184533   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:28.189494   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.213778   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:28.213869   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.245273   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:28.245311   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:28.245342   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:28.245409   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:28.260712   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:28.273221   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:28.273285   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:28.287690   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:28.303163   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:28.371756   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:28.449427   67622 docker.go:234] disabling docker service ...
	I0919 22:33:28.449499   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:28.467447   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:28.481298   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:28.558342   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:28.661953   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:28.675151   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:28.695465   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:28.695540   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.709844   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:28.709908   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.720817   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.731627   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.742506   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:28.753955   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.765830   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.784178   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.795285   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:28.804935   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:28.814326   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:28.918546   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:33:29.014541   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:29.014608   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:29.018746   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:29.018808   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:29.023643   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:29.059951   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:29.060029   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.098887   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.139500   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:29.141059   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:29.158455   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:29.162464   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.175140   67622 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:33:29.175280   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:29.175333   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.248936   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.248961   67622 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:33:29.249018   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.287448   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.287472   67622 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:33:29.287479   67622 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:33:29.287577   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:33:29.287645   67622 ssh_runner.go:195] Run: crio config
	I0919 22:33:29.333242   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:29.333266   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:29.333277   67622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:33:29.333307   67622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:33:29.333435   67622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:33:29.333463   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:29.333506   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:29.346933   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:29.347143   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:33:29.347207   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:29.356691   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:29.356785   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:33:29.366595   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:33:29.386942   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:29.409639   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:33:29.428838   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:33:29.449681   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:29.453679   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.465645   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:29.534315   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:29.558739   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:33:29.558767   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:29.558787   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:29.558925   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:29.558985   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:29.559000   67622 certs.go:256] generating profile certs ...
	I0919 22:33:29.559069   67622 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:29.559085   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt with IP's: []
	I0919 22:33:30.287530   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt ...
	I0919 22:33:30.287574   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt: {Name:mk4722cc3499628a90845973a8533bb2f9abaeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287824   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key ...
	I0919 22:33:30.287842   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key: {Name:mk95f513fb24356a441cd3443b0c241a35c61186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287965   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f
	I0919 22:33:30.287986   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:33:30.489410   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f ...
	I0919 22:33:30.489443   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f: {Name:mk50e3acb42d56649151d2b237558cdb8ee1e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489635   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f ...
	I0919 22:33:30.489654   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f: {Name:mke306934752782de0837143fc2872d74f6e5eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489765   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:30.489897   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:30.489990   67622 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:30.490013   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt with IP's: []
	I0919 22:33:30.692692   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt ...
	I0919 22:33:30.692725   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt: {Name:mkec855f3fc5cc887af952272036f6b6db6c122d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.692913   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key ...
	I0919 22:33:30.692929   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key: {Name:mk41b934f9d330e25cbaab5814efeb52422665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.693033   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:30.693058   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:30.693082   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:30.693113   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:30.693131   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:30.693163   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:30.693182   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:30.693202   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:30.693280   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:30.693327   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:30.693343   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:30.693379   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:30.693413   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:30.693444   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:30.693498   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:30.693554   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:30.693575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:30.693594   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:30.694169   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:30.721034   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:30.747256   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:30.773231   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:30.799758   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:33:30.825801   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:30.852404   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:30.879195   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:30.905339   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:30.934694   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:30.960677   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:30.987763   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:33:31.008052   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:31.014839   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:31.025609   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029511   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029570   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.036708   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:31.047387   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:31.058096   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062519   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062579   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.070083   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:31.080599   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:31.091228   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095407   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095480   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.102644   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:31.114044   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:31.118226   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:31.118374   67622 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:31.118467   67622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:33:31.118521   67622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:33:31.155950   67622 cri.go:89] found id: ""
	I0919 22:33:31.156024   67622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:33:31.166037   67622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:33:31.175817   67622 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:33:31.175867   67622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:33:31.185690   67622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:33:31.185707   67622 kubeadm.go:157] found existing configuration files:
	
	I0919 22:33:31.185748   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:33:31.195069   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:33:31.195184   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:33:31.204614   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:33:31.216208   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:33:31.216271   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:33:31.226344   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.239080   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:33:31.239168   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.248993   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:33:31.258113   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:33:31.258175   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:33:31.267147   67622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:33:31.307922   67622 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:33:31.308018   67622 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:33:31.323647   67622 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:33:31.323774   67622 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:33:31.323839   67622 kubeadm.go:310] OS: Linux
	I0919 22:33:31.323926   67622 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:33:31.324015   67622 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:33:31.324149   67622 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:33:31.324222   67622 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:33:31.324293   67622 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:33:31.324356   67622 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:33:31.324417   67622 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:33:31.324484   67622 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:33:31.377266   67622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:33:31.377414   67622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:33:31.377573   67622 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:33:31.384351   67622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:33:31.386660   67622 out.go:252]   - Generating certificates and keys ...
	I0919 22:33:31.386732   67622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:33:31.386811   67622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:33:31.789403   67622 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:33:31.939575   67622 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:33:32.401218   67622 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:33:32.595052   67622 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:33:33.118331   67622 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:33:33.118543   67622 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.059417   67622 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:33:34.059600   67622 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.382200   67622 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:33:34.860984   67622 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:33:34.940846   67622 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:33:34.940919   67622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:33:35.161325   67622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:33:35.301598   67622 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:33:35.610006   67622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:33:35.767736   67622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:33:36.001912   67622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:33:36.002376   67622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:33:36.005697   67622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:33:36.010843   67622 out.go:252]   - Booting up control plane ...
	I0919 22:33:36.010955   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:33:36.011044   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:33:36.011162   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:33:36.018352   67622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:33:36.018463   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:33:36.024835   67622 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:33:36.025002   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:33:36.025072   67622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:33:36.099408   67622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:33:36.099593   67622 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:33:37.100521   67622 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186505s
	I0919 22:33:37.103674   67622 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:33:37.103813   67622 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:33:37.103961   67622 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:33:37.104092   67622 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:33:38.781776   67622 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.678113429s
	I0919 22:33:39.011334   67622 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.907735584s
	I0919 22:33:43.273677   67622 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.17006372s
	I0919 22:33:43.285923   67622 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:33:43.298989   67622 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:33:43.310631   67622 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:33:43.310870   67622 kubeadm.go:310] [mark-control-plane] Marking the node ha-984158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:33:43.319951   67622 kubeadm.go:310] [bootstrap-token] Using token: wc3lep.4w3ocubibd25hbwe
	I0919 22:33:43.321976   67622 out.go:252]   - Configuring RBAC rules ...
	I0919 22:33:43.322154   67622 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:33:43.325670   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:33:43.333517   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:33:43.338509   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:33:43.342046   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:33:43.345237   67622 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:33:43.680686   67622 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:33:44.099041   67622 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:33:44.680531   67622 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:33:44.681480   67622 kubeadm.go:310] 
	I0919 22:33:44.681572   67622 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:33:44.681591   67622 kubeadm.go:310] 
	I0919 22:33:44.681690   67622 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:33:44.681708   67622 kubeadm.go:310] 
	I0919 22:33:44.681761   67622 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:33:44.681854   67622 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:33:44.681910   67622 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:33:44.681916   67622 kubeadm.go:310] 
	I0919 22:33:44.681968   67622 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:33:44.681978   67622 kubeadm.go:310] 
	I0919 22:33:44.682015   67622 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:33:44.682021   67622 kubeadm.go:310] 
	I0919 22:33:44.682066   67622 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:33:44.682162   67622 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:33:44.682244   67622 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:33:44.682258   67622 kubeadm.go:310] 
	I0919 22:33:44.682378   67622 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:33:44.682497   67622 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:33:44.682510   67622 kubeadm.go:310] 
	I0919 22:33:44.682620   67622 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.682733   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 22:33:44.682757   67622 kubeadm.go:310] 	--control-plane 
	I0919 22:33:44.682761   67622 kubeadm.go:310] 
	I0919 22:33:44.682837   67622 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:33:44.682844   67622 kubeadm.go:310] 
	I0919 22:33:44.682919   67622 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.683036   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 22:33:44.685970   67622 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:33:44.686071   67622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 22:33:44.686097   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:44.686119   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:44.688616   67622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:33:44.690471   67622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:33:44.695364   67622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:33:44.695381   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:33:44.715791   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:33:44.939557   67622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:33:44.939639   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:44.939678   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158 minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=true
	I0919 22:33:45.023827   67622 ops.go:34] apiserver oom_adj: -16
	I0919 22:33:45.023957   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:45.524455   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.024018   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.524600   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.024332   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.524121   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.592879   67622 kubeadm.go:1105] duration metric: took 2.653303844s to wait for elevateKubeSystemPrivileges
	I0919 22:33:47.592920   67622 kubeadm.go:394] duration metric: took 16.47455539s to StartCluster
	I0919 22:33:47.592944   67622 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593012   67622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:47.593661   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593878   67622 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:47.593899   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:33:47.593915   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:33:47.593910   67622 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:33:47.593968   67622 addons.go:69] Setting storage-provisioner=true in profile "ha-984158"
	I0919 22:33:47.593987   67622 addons.go:238] Setting addon storage-provisioner=true in "ha-984158"
	I0919 22:33:47.594014   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.594020   67622 addons.go:69] Setting default-storageclass=true in profile "ha-984158"
	I0919 22:33:47.594052   67622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-984158"
	I0919 22:33:47.594180   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:47.594397   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.594490   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.616114   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:33:47.616790   67622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:33:47.616815   67622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:33:47.616821   67622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:33:47.616827   67622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:33:47.616832   67622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:33:47.616874   67622 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:33:47.617290   67622 addons.go:238] Setting addon default-storageclass=true in "ha-984158"
	I0919 22:33:47.617334   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.617664   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.618198   67622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:33:47.619811   67622 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.619828   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:33:47.619873   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639214   67622 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.639233   67622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:33:47.639292   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639429   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.661245   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.673462   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:33:47.757401   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.772885   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.832329   67622 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
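Note: the step above rewrites the CoreDNS ConfigMap so that host.minikube.internal resolves to the host gateway (192.168.49.1 on the default docker network). A trimmed sketch of the sed pipeline the log shows, with the --kubeconfig and binary paths dropped for readability (illustrative only):

    # Splice a hosts{} block in front of the "forward . /etc/resolv.conf" line
    # of the live Corefile, then replace the ConfigMap in place.
    kubectl -n kube-system get configmap coredns -o yaml |
      sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' |
      kubectl replace -f -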
	I0919 22:33:48.046946   67622 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:33:48.048036   67622 addons.go:514] duration metric: took 454.124749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:33:48.048079   67622 start.go:246] waiting for cluster config update ...
	I0919 22:33:48.048094   67622 start.go:255] writing updated cluster config ...
	I0919 22:33:48.049801   67622 out.go:203] 
	I0919 22:33:48.051165   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:48.051243   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.053137   67622 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:33:48.054674   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:48.056311   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:48.057779   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.057806   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:48.057888   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:48.057928   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:48.057940   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:48.058063   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.078572   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:48.078592   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:48.078612   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:48.078641   67622 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:48.078744   67622 start.go:364] duration metric: took 83.645µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:33:48.078773   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:48.078850   67622 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:33:48.081555   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:48.081669   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:48.081703   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:48.081781   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:48.081822   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081843   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.081910   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:48.081940   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081960   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.082241   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:48.099940   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc0016638f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:33:48.099978   67622 kic.go:121] calculated static IP "192.168.49.3" for the "ha-984158-m02" container
	I0919 22:33:48.100047   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:48.119768   67622 cli_runner.go:164] Run: docker volume create ha-984158-m02 --label name.minikube.sigs.k8s.io=ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:48.140861   67622 oci.go:103] Successfully created a docker volume ha-984158-m02
	I0919 22:33:48.140948   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --entrypoint /usr/bin/test -v ha-984158-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:48.564029   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m02
	I0919 22:33:48.564088   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.564128   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:48.564199   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:52.827364   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.263115206s)
	I0919 22:33:52.827395   67622 kic.go:203] duration metric: took 4.263265347s to extract preloaded images to volume ...
	W0919 22:33:52.827486   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:52.827514   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:52.827554   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:52.885075   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m02 --name ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m02 --network ha-984158 --ip 192.168.49.3 --volume ha-984158-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:53.180687   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Running}}
	I0919 22:33:53.199679   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.219636   67622 cli_runner.go:164] Run: docker exec ha-984158-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:53.277586   67622 oci.go:144] the created container "ha-984158-m02" has a running status.
	I0919 22:33:53.277613   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa...
	I0919 22:33:53.439379   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:53.439435   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:53.481669   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.502631   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:53.502661   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:53.550818   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.569934   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:53.570033   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.591163   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.591567   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.591594   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:53.732425   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.732454   67622 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:33:53.732620   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.753544   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.753771   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.753787   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:33:53.905778   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.905859   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.925947   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.926237   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.926262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:54.064017   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:54.064058   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:54.064091   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:54.064128   67622 provision.go:84] configureAuth start
	I0919 22:33:54.064205   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.083365   67622 provision.go:143] copyHostCerts
	I0919 22:33:54.083408   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083437   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:54.083446   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083518   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:54.083599   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083619   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:54.083625   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083651   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:54.083695   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083712   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:54.083718   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083741   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:54.083825   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:33:54.283812   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:54.283869   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:54.283908   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.302357   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.401996   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:54.402067   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:54.430462   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:54.430540   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:33:54.457015   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:54.457097   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:33:54.483980   67622 provision.go:87] duration metric: took 419.834494ms to configureAuth
	I0919 22:33:54.484006   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:54.484189   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:54.484291   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.502801   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:54.503005   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:54.503020   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:54.741937   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:54.741974   67622 machine.go:96] duration metric: took 1.172016504s to provisionDockerMachine
	I0919 22:33:54.741989   67622 client.go:171] duration metric: took 6.660276334s to LocalClient.Create
	I0919 22:33:54.742015   67622 start.go:167] duration metric: took 6.660346483s to libmachine.API.Create "ha-984158"
	I0919 22:33:54.742030   67622 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:33:54.742043   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:54.742141   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:54.742204   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.760779   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.861057   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:54.864884   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:54.864926   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:54.864936   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:54.864942   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:54.864952   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:54.865018   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:54.865096   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:54.865119   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:54.865208   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:54.874518   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:54.902675   67622 start.go:296] duration metric: took 160.632418ms for postStartSetup
	I0919 22:33:54.903619   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.921915   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:54.922275   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:54.922332   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.939498   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.032204   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:55.036544   67622 start.go:128] duration metric: took 6.957677622s to createHost
	I0919 22:33:55.036576   67622 start.go:83] releasing machines lock for "ha-984158-m02", held for 6.957813538s
	I0919 22:33:55.036645   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:55.056621   67622 out.go:179] * Found network options:
	I0919 22:33:55.058171   67622 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:33:55.059521   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:33:55.059575   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:33:55.059642   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:55.059693   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.059730   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:55.059795   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.079269   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.079505   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.307919   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:55.312965   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.336548   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:55.336628   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.368875   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
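Note: pre-existing loopback/bridge/podman CNI configs on the new node are side-lined by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later is active. A minimal sketch of the renaming step shown above (suffix and /etc/cni/net.d path taken from the log):

    # Disable competing CNI configs without deleting them.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;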
	I0919 22:33:55.368896   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:55.368929   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:55.368975   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:55.384084   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:55.396627   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:55.396684   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:55.411878   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:55.426921   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:55.498750   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:55.574511   67622 docker.go:234] disabling docker service ...
	I0919 22:33:55.574592   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:55.592451   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:55.605407   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:55.676576   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:55.779960   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:55.791691   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:55.810222   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:55.810287   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.823669   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:55.823742   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.835957   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.848163   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.862113   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:55.874185   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.886226   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.904556   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.915914   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:55.925425   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:55.934730   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:56.048946   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
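Note: the block above configures CRI-O on the m02 node before restarting it: point crictl at the CRI-O socket, pin the pause image, switch the cgroup manager to systemd (matching the cgroup driver detected on the host), and enable IP forwarding. A condensed sketch of those edits, using the same /etc/crio/crio.conf.d/02-crio.conf drop-in named in the log:

    # crictl should talk to CRI-O's socket.
    echo 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
    # Pin the pause image and use the systemd cgroup manager.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
    # Enable IP forwarding and restart the runtime.
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio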
	I0919 22:33:56.146544   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:56.146625   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:56.150812   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:56.150868   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:56.155192   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:56.191696   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:56.191785   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.233991   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.274090   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:56.275720   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:33:56.276812   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:56.294583   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:56.298596   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
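Note: the host record is pinned in the node's /etc/hosts by filtering out any stale entry and appending the current one via a temp file, since the target file needs root to modify. A sketch of the pattern used above:

    # Replace any stale host.minikube.internal entry and append the current one.
    { grep -v $'\thost.minikube.internal$' /etc/hosts; \
      printf '192.168.49.1\thost.minikube.internal\n'; } > /tmp/hosts.$$ \
      && sudo cp /tmp/hosts.$$ /etc/hosts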
	I0919 22:33:56.311418   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:33:56.311645   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:56.311889   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:56.330141   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:56.330381   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:33:56.330391   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:56.330404   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.330513   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:56.330548   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:56.330558   67622 certs.go:256] generating profile certs ...
	I0919 22:33:56.330645   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:56.330671   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:33:56.330686   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:33:56.589696   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 ...
	I0919 22:33:56.589724   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648: {Name:mk231e62d196ad4ac4ba36bf02a832f78de0258d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.589931   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 ...
	I0919 22:33:56.589950   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648: {Name:mkf30412a461a8bacfd366640c7d4da1146a9418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.590056   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:56.590233   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:56.590374   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:56.590389   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:56.590402   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:56.590416   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:56.590429   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:56.590440   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:56.590450   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:56.590459   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:56.590476   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:56.590527   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:56.590552   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:56.590561   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:56.590584   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:56.590605   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:56.590626   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:56.590665   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:56.590692   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:56.590708   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:56.590721   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:56.590767   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:56.609877   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:56.698485   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:33:56.703209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:33:56.716550   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:33:56.720735   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:33:56.733890   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:33:56.737616   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:33:56.750557   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:33:56.754948   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:33:56.770690   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:33:56.774864   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:33:56.787587   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:33:56.791154   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:33:56.804497   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:56.832411   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:56.858185   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:56.885311   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:56.911248   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:33:56.937552   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:56.963365   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:56.988811   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:57.014413   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:57.043525   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:57.069549   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:57.095993   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:33:57.115254   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:33:57.135395   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:33:57.155031   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:33:57.175220   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:33:57.194674   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:33:57.215027   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:33:57.235048   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:57.240702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:57.251492   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255754   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255806   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.263388   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:57.274606   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:57.285494   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289707   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289758   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.296995   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:57.307702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:57.318927   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323131   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323194   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.330266   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
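Note: the steps above install the minikube CA and user certificates under /usr/share/ca-certificates and then create the hash-named symlinks OpenSSL expects in /etc/ssl/certs; the link name (for example b5213941.0) is the subject hash printed by openssl x509 -hash. A sketch, assuming minikubeCA.pem has already been copied to the node as shown:

    # Compute the OpenSSL subject hash and link the cert under that name.
    hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${hash}.0"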
	I0919 22:33:57.340891   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:57.344726   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:57.344784   67622 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:33:57.344872   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:33:57.344897   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:57.344937   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:57.357462   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:57.357529   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
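Note: because the lsmod probe found no ip_vs modules on the node, minikube skips control-plane load-balancing and the generated kube-vip static pod above relies on ARP-announced leader election for the VIP (vip_arp/vip_leaderelection set to "true", address 192.168.49.254). A sketch of the module check that drives this decision, as seen in the log:

    # If ip_vs is not loaded, fall back to the ARP-based VIP handled by the kube-vip leader.
    if ! sudo sh -c 'lsmod | grep -q ip_vs'; then
      echo "ip_vs not available; using ARP-based VIP only"
    fi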
	I0919 22:33:57.357582   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:57.367667   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:57.367722   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:33:57.377333   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:33:57.395969   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:57.418145   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:33:57.439308   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:57.443458   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:57.454967   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:57.522382   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:57.545690   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:57.545979   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:57.546124   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:33:57.546185   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:57.565712   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:57.714381   67622 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:57.714452   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:34:14.891768   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.177290621s)
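Note: the second control-plane node is joined with a standard kubeadm control-plane join through the shared VIP endpoint: a bootstrap token plus the CA cert hash for discovery, the CRI-O socket, and the node's advertise address. Stripped of the run-specific token and hash (placeholders below are not the real values), the shape of the command is:

    # Join an additional control-plane node via control-plane.minikube.internal.
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <bootstrap-token> \
      --discovery-token-ca-cert-hash sha256:<ca-cert-hash> \
      --control-plane \
      --apiserver-advertise-address 192.168.49.3 \
      --apiserver-bind-port 8443 \
      --cri-socket unix:///var/run/crio/crio.sock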
	I0919 22:34:14.891806   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:15.112649   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m02 minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:15.189152   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:15.268843   67622 start.go:319] duration metric: took 17.722860685s to joinCluster
	I0919 22:34:15.268921   67622 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:15.269212   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:15.270715   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:15.272193   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:15.373529   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:15.387143   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:15.387217   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:15.387440   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	W0919 22:34:17.391040   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:19.391218   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:21.391885   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:23.891865   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:25.892208   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	I0919 22:34:28.391466   67622 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:34:28.391502   67622 node_ready.go:38] duration metric: took 13.004045549s for node "ha-984158-m02" to be "Ready" ...
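	The node_ready.go wait above boils down to polling the joined node's Ready condition through the Kubernetes API until it reports True or the timeout expires. A minimal client-go sketch of such a poll (illustrative only; the kubeconfig path, poll interval, and function names are assumptions, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the named Node until its Ready condition reports True
// or the timeout elapses.
func waitNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, timeout, true,
		func(ctx context.Context) (bool, error) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // treat API errors as transient and keep retrying
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	// Assumption: a kubeconfig at this placeholder path points at the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)
	if err := waitNodeReady(context.Background(), client, "ha-984158-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}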
	I0919 22:34:28.391521   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:34:28.391578   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:34:28.403875   67622 api_server.go:72] duration metric: took 13.134915716s to wait for apiserver process to appear ...
	I0919 22:34:28.403907   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:34:28.403928   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:34:28.409570   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:34:28.410599   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:34:28.410630   67622 api_server.go:131] duration metric: took 6.715556ms to wait for apiserver health ...
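	The healthz probe logged above is a plain HTTPS GET against the apiserver endpoint; a 200 response with body "ok" is treated as healthy. A self-contained sketch of the same kind of check (the certificate paths are placeholders, not the files used in this run):

package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"io"
	"net/http"
	"os"
)

func main() {
	// Assumption: the cluster CA and an admin client certificate/key are available on disk.
	caPEM, err := os.ReadFile("/path/to/ca.crt")
	if err != nil {
		panic(err)
	}
	pool := x509.NewCertPool()
	pool.AppendCertsFromPEM(caPEM)

	clientCert, err := tls.LoadX509KeyPair("/path/to/client.crt", "/path/to/client.key")
	if err != nil {
		panic(err)
	}

	httpClient := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{RootCAs: pool, Certificates: []tls.Certificate{clientCert}},
	}}
	resp, err := httpClient.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // a healthy apiserver returns 200 and "ok"
}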
	I0919 22:34:28.410646   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:34:28.415646   67622 system_pods.go:59] 17 kube-system pods found
	I0919 22:34:28.415679   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.415685   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.415689   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.415692   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.415695   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.415698   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.415701   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.415704   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.415707   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.415710   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.415713   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.415715   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.415718   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.415721   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.415723   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.415726   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.415729   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.415734   67622 system_pods.go:74] duration metric: took 5.082988ms to wait for pod list to return data ...
	I0919 22:34:28.415742   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:34:28.418466   67622 default_sa.go:45] found service account: "default"
	I0919 22:34:28.418487   67622 default_sa.go:55] duration metric: took 2.73954ms for default service account to be created ...
	I0919 22:34:28.418498   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:34:28.422326   67622 system_pods.go:86] 17 kube-system pods found
	I0919 22:34:28.422351   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.422357   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.422361   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.422365   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.422368   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.422376   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.422379   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.422383   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.422386   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.422390   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.422393   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.422396   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.422399   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.422402   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.422405   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.422408   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.422415   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.422421   67622 system_pods.go:126] duration metric: took 3.917676ms to wait for k8s-apps to be running ...
	I0919 22:34:28.422429   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:34:28.422473   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:34:28.434607   67622 system_svc.go:56] duration metric: took 12.16943ms WaitForService to wait for kubelet
	I0919 22:34:28.434637   67622 kubeadm.go:578] duration metric: took 13.165683838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:28.434659   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:34:28.437727   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437756   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437777   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437784   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437791   67622 node_conditions.go:105] duration metric: took 3.125214ms to run NodePressure ...
	I0919 22:34:28.437804   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:34:28.437837   67622 start.go:255] writing updated cluster config ...
	I0919 22:34:28.440033   67622 out.go:203] 
	I0919 22:34:28.441576   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:28.441673   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.443252   67622 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:34:28.444693   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:34:28.446038   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:28.447156   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.447185   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:28.447193   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:28.447285   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:28.447301   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:34:28.447448   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.469851   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:28.469873   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:28.469889   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:28.469913   67622 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:28.470008   67622 start.go:364] duration metric: took 81.331µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:34:28.470041   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevi
rt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Sta
ticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:28.470170   67622 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:34:28.472544   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:34:28.472649   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:34:28.472677   67622 client.go:168] LocalClient.Create starting
	I0919 22:34:28.472742   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:34:28.472780   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.472861   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:34:28.472888   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472901   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.473209   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:28.490760   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc001af8060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:34:28.490805   67622 kic.go:121] calculated static IP "192.168.49.4" for the "ha-984158-m03" container
	I0919 22:34:28.490880   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:34:28.509896   67622 cli_runner.go:164] Run: docker volume create ha-984158-m03 --label name.minikube.sigs.k8s.io=ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:34:28.528837   67622 oci.go:103] Successfully created a docker volume ha-984158-m03
	I0919 22:34:28.528911   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --entrypoint /usr/bin/test -v ha-984158-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:34:28.927062   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m03
	I0919 22:34:28.927168   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.927199   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:34:28.927268   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:34:33.212737   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.285428249s)
	I0919 22:34:33.212770   67622 kic.go:203] duration metric: took 4.285569649s to extract preloaded images to volume ...
	W0919 22:34:33.212842   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:34:33.212868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:34:33.212907   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:34:33.271794   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m03 --name ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m03 --network ha-984158 --ip 192.168.49.4 --volume ha-984158-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:34:33.577096   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Running}}
	I0919 22:34:33.595112   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:33.615056   67622 cli_runner.go:164] Run: docker exec ha-984158-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:34:33.665241   67622 oci.go:144] the created container "ha-984158-m03" has a running status.
	I0919 22:34:33.665277   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa...
	I0919 22:34:34.167881   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:34:34.167925   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:34:34.195311   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.214983   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:34:34.215010   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:34:34.269287   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.290822   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:34.290917   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.310406   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.310629   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.310645   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:34.449392   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.449418   67622 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:34:34.449477   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.470431   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.470643   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.470659   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:34:34.622394   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.622486   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.641997   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.642244   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.642262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:34.780134   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.780169   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:34:34.780191   67622 ubuntu.go:190] setting up certificates
	I0919 22:34:34.780205   67622 provision.go:84] configureAuth start
	I0919 22:34:34.780271   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:34.799584   67622 provision.go:143] copyHostCerts
	I0919 22:34:34.799658   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799692   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:34:34.799701   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799769   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:34:34.799851   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799870   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:34:34.799877   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799904   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:34:34.799966   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.799983   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:34:34.799989   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.800012   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:34:34.800115   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:34:34.944518   67622 provision.go:177] copyRemoteCerts
	I0919 22:34:34.944575   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:34.944606   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.963408   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.062939   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:35.063013   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:35.095527   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:35.095582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:35.122809   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:35.122880   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:35.150023   67622 provision.go:87] duration metric: took 369.804514ms to configureAuth
	I0919 22:34:35.150056   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:35.150311   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:35.150452   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.170186   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:35.170414   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:35.170546   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:34:35.424872   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:34:35.424903   67622 machine.go:96] duration metric: took 1.1340482s to provisionDockerMachine
	I0919 22:34:35.424913   67622 client.go:171] duration metric: took 6.952229218s to LocalClient.Create
	I0919 22:34:35.424932   67622 start.go:167] duration metric: took 6.95228363s to libmachine.API.Create "ha-984158"
	I0919 22:34:35.424941   67622 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:34:35.424950   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:35.425005   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:35.425044   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.443122   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.542973   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:35.547045   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:35.547098   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:35.547140   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:35.547149   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:35.547164   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:34:35.547243   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:34:35.547346   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:34:35.547359   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:34:35.547461   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:35.557222   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:35.587487   67622 start.go:296] duration metric: took 162.532916ms for postStartSetup
	I0919 22:34:35.587898   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.605883   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:35.606188   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:35.606230   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.625506   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.719327   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:35.724945   67622 start.go:128] duration metric: took 7.25475977s to createHost
	I0919 22:34:35.724975   67622 start.go:83] releasing machines lock for "ha-984158-m03", held for 7.25495293s
	I0919 22:34:35.725066   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.746436   67622 out.go:179] * Found network options:
	I0919 22:34:35.748613   67622 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:34:35.750204   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750230   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750252   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750261   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:35.750333   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:34:35.750367   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.750414   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:35.750481   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.770785   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.771520   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:36.012617   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:36.017809   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.041480   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:36.041572   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.074662   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:34:36.074688   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:34:36.074719   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:36.074766   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:36.093544   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:36.107751   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:34:36.107801   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:34:36.123972   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:34:36.140690   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:34:36.213915   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:34:36.293890   67622 docker.go:234] disabling docker service ...
	I0919 22:34:36.293970   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:34:36.315495   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:34:36.329394   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:34:36.401603   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:34:36.566519   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.580168   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:36.598521   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:34:36.598580   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.612994   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:34:36.613052   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.625369   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.636513   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.647884   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:36.658467   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.670077   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.688463   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.700347   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:36.710192   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:36.722230   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.786818   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:34:36.889165   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:34:36.889244   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:34:36.893369   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.893434   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.897483   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.935462   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:34:36.935558   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:36.971682   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:37.011225   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:34:37.012939   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:34:37.014619   67622 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:34:37.016609   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:37.035904   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:37.040209   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:37.053278   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:34:37.053547   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:37.053803   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:34:37.073847   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:37.074139   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:34:37.074157   67622 certs.go:194] generating shared ca certs ...
	I0919 22:34:37.074173   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.074282   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:34:37.074329   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:34:37.074340   67622 certs.go:256] generating profile certs ...
	I0919 22:34:37.074417   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:34:37.074441   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:34:37.074452   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.137117   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 ...
	I0919 22:34:37.137145   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7: {Name:mk19194d581061c0301a7ebaafcb4f75dd6f88da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137332   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 ...
	I0919 22:34:37.137346   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7: {Name:mkdc03dbd8fb2d6fc0a8ac2bb45b7aa14987fe74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137418   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:34:37.137557   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
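	The certs.go steps above mint a fresh apiserver serving certificate whose IP SANs cover the existing control-plane nodes, the new m03 address, and the VIP, then sign it with the shared minikube CA. A compact crypto/x509 sketch of issuing a certificate with those SANs (CA file paths, key type, subject, and validity are assumptions for illustration, not minikube's exact parameters):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Assumption: the signing CA certificate and RSA key already exist in PEM form.
	caCert, caKey := loadCA("/path/to/ca.crt", "/path/to/ca.key")

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs mirroring the list logged above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.49.2"), net.ParseIP("192.168.49.3"),
			net.ParseIP("192.168.49.4"), net.ParseIP("192.168.49.254"),
		},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

// loadCA parses a PEM-encoded CA certificate and PKCS#1 RSA private key from disk.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	certPEM, err := os.ReadFile(certPath)
	if err != nil {
		panic(err)
	}
	keyPEM, err := os.ReadFile(keyPath)
	if err != nil {
		panic(err)
	}
	certBlock, _ := pem.Decode(certPEM)
	keyBlock, _ := pem.Decode(keyPEM)
	if certBlock == nil || keyBlock == nil {
		panic("bad PEM input")
	}
	cert, err := x509.ParseCertificate(certBlock.Bytes)
	if err != nil {
		panic(err)
	}
	key, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	if err != nil {
		panic(err)
	}
	return cert, key
}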
	I0919 22:34:37.137679   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:34:37.137694   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.137706   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.137719   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.137732   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.137744   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.137756   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.137768   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.137780   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.137836   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:34:37.137865   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.137875   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.137895   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.137918   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.137950   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.137989   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:37.138014   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.138027   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.138042   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.138089   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:37.156562   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:37.245522   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:34:37.249874   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:34:37.263553   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:34:37.267840   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:34:37.282009   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:34:37.286008   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:34:37.299365   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:34:37.303011   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:34:37.316000   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:34:37.319968   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:34:37.335075   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:34:37.339209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:34:37.352485   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.379736   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:34:37.405614   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.430819   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:34:37.457286   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:34:37.485582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.511990   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.539620   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:34:37.566336   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.597966   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:34:37.624934   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:34:37.652281   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:34:37.672835   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:34:37.693826   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:34:37.712995   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:34:37.735150   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:34:37.755380   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:34:37.775695   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:34:37.796705   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.802715   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:34:37.814531   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819194   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819264   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.826904   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.838758   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.849465   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853251   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853305   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.860596   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:34:37.872602   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:34:37.885280   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889622   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889680   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.896943   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.908337   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.912368   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:34:37.912422   67622 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:34:37.912521   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:34:37.912549   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:37.912589   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:37.927225   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.927295   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
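The kube-vip static-pod manifest above is rendered from the profile's HA settings (kube-vip.go:115/137) and dropped into /etc/kubernetes/manifests. A rough sketch of that kind of rendering with Go's text/template; the struct and field names here are illustrative, not minikube's actual types:

package main

import (
	"os"
	"text/template"
)

// vipConfig carries only the values that vary per cluster (names are made up).
type vipConfig struct {
	VIP       string // APIServerHAVIP, e.g. 192.168.49.254
	Port      int    // API server port, e.g. 8443
	Interface string // NIC the VIP is announced on, e.g. eth0
	Image     string // kube-vip image reference
}

const manifestTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args: ["manager"]
    env:
    - {name: port, value: "{{.Port}}"}
    - {name: vip_interface, value: {{.Interface}}}
    - {name: cp_enable, value: "true"}
    - {name: address, value: {{.VIP}}}
    image: {{.Image}}
    name: kube-vip
  hostNetwork: true
`

func main() {
	cfg := vipConfig{VIP: "192.168.49.254", Port: 8443, Interface: "eth0",
		Image: "ghcr.io/kube-vip/kube-vip:v1.0.0"}
	t := template.Must(template.New("kube-vip").Parse(manifestTmpl))
	_ = t.Execute(os.Stdout, cfg) // minikube copies the result to /etc/kubernetes/manifests/kube-vip.yaml
}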
	I0919 22:34:37.927349   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:37.937175   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:37.937241   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:34:37.946525   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:34:37.966151   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:37.991832   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:38.014409   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:38.018813   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:38.034487   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:38.100010   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:38.123308   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:38.123594   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:38.123717   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:34:38.123769   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:38.144625   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:38.293340   67622 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:38.293387   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:34:51.872651   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (13.579238089s)
	I0919 22:34:51.872690   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:52.127072   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m03 minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:52.206869   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:52.293044   67622 start.go:319] duration metric: took 14.169442875s to joinCluster
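The --discovery-token-ca-cert-hash passed to kubeadm join above is the cluster CA pin: a sha256 over the DER-encoded Subject Public Key Info of the CA certificate, the same value `kubeadm token create --print-join-command` embeds after "sha256:". A small Go sketch that reproduces it from a CA PEM (the path below is the one minikube uses on its nodes; adjust to /etc/kubernetes/pki/ca.crt on a stock kubeadm host):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the raw SubjectPublicKeyInfo, not the whole certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("--discovery-token-ca-cert-hash sha256:%x\n", sum)
}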
	I0919 22:34:52.293202   67622 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:52.293464   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:52.295014   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:52.296471   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:52.405642   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:52.419776   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:52.419840   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:52.420054   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	W0919 22:34:54.424074   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:56.924240   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:58.925198   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:01.425329   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:03.923474   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	I0919 22:35:05.424225   67622 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:35:05.424253   67622 node_ready.go:38] duration metric: took 13.004161929s for node "ha-984158-m03" to be "Ready" ...
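The retry loop above (node_ready.go polling until the m03 node reports "Ready") is the same condition `kubectl wait` checks. An equivalent probe driven from Go, purely illustrative and assuming the kubectl context name matches the profile name:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Block until the new control-plane node reports Ready, with the same 6m budget.
	cmd := exec.Command("kubectl", "--context", "ha-984158",
		"wait", "--for=condition=Ready", "node/ha-984158-m03", "--timeout=6m")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "node never became Ready:", err)
		os.Exit(1)
	}
}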
	I0919 22:35:05.424266   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:05.424326   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:05.438342   67622 api_server.go:72] duration metric: took 13.14509411s to wait for apiserver process to appear ...
	I0919 22:35:05.438367   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:05.438390   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:05.442575   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:05.443547   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:05.443573   67622 api_server.go:131] duration metric: took 5.19876ms to wait for apiserver health ...
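The healthz probe above is just an HTTPS GET against the API server that expects a 200 "ok" body. A minimal Go version of that probe (sketch only; it skips certificate verification for brevity, whereas minikube trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz unreachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%d %s\n", resp.StatusCode, body) // expect "200 ok"
}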
	I0919 22:35:05.443582   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:05.452030   67622 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:05.452062   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.452067   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.452073   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.452079   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.452084   67622 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.452089   67622 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.452094   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.452129   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.452136   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.452141   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.452146   67622 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.452151   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.452156   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.452161   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.452165   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.452170   67622 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.452174   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.452179   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.452184   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.452188   67622 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.452193   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.452199   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.452205   67622 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.452208   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.452217   67622 system_pods.go:74] duration metric: took 8.62798ms to wait for pod list to return data ...
	I0919 22:35:05.452227   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:05.455571   67622 default_sa.go:45] found service account: "default"
	I0919 22:35:05.455594   67622 default_sa.go:55] duration metric: took 3.361804ms for default service account to be created ...
	I0919 22:35:05.455603   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:05.460748   67622 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:05.460777   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.460783   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.460787   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.460790   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.460793   67622 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.460798   67622 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.460801   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.460803   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.460806   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.460809   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.460812   67622 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.460815   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.460818   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.460821   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.460826   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.460829   67622 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.460832   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.460835   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.460838   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.460841   67622 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.460844   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.460847   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.460850   67622 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.460853   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.460859   67622 system_pods.go:126] duration metric: took 5.251911ms to wait for k8s-apps to be running ...
	I0919 22:35:05.460866   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:05.460906   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:05.475728   67622 system_svc.go:56] duration metric: took 14.850569ms WaitForService to wait for kubelet
	I0919 22:35:05.475767   67622 kubeadm.go:578] duration metric: took 13.182524274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:05.475791   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:05.479992   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480016   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480028   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480032   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480035   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480038   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480042   67622 node_conditions.go:105] duration metric: took 4.246099ms to run NodePressure ...
	I0919 22:35:05.480052   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:35:05.480076   67622 start.go:255] writing updated cluster config ...
	I0919 22:35:05.480391   67622 ssh_runner.go:195] Run: rm -f paused
	I0919 22:35:05.484443   67622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:05.484864   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
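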
	I0919 22:35:05.488632   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494158   67622 pod_ready.go:94] pod "coredns-66bc5c9577-5gnbx" is "Ready"
	I0919 22:35:05.494184   67622 pod_ready.go:86] duration metric: took 5.519921ms for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494194   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.498979   67622 pod_ready.go:94] pod "coredns-66bc5c9577-ltjmz" is "Ready"
	I0919 22:35:05.499001   67622 pod_ready.go:86] duration metric: took 4.801852ms for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.501488   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506605   67622 pod_ready.go:94] pod "etcd-ha-984158" is "Ready"
	I0919 22:35:05.506631   67622 pod_ready.go:86] duration metric: took 5.121241ms for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506643   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511687   67622 pod_ready.go:94] pod "etcd-ha-984158-m02" is "Ready"
	I0919 22:35:05.511711   67622 pod_ready.go:86] duration metric: took 5.063338ms for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511721   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.686203   67622 request.go:683] "Waited before sending request" delay="174.390617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-984158-m03"
	I0919 22:35:05.886318   67622 request.go:683] "Waited before sending request" delay="196.323175ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:05.889520   67622 pod_ready.go:94] pod "etcd-ha-984158-m03" is "Ready"
	I0919 22:35:05.889544   67622 pod_ready.go:86] duration metric: took 377.817661ms for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
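The "Waited before sending request ... client-side throttling" messages here come from client-go's default rate limiter: with QPS and Burst left at 0 in the rest.Config shown earlier, the client falls back to its defaults (roughly QPS 5, Burst 10), so the burst of per-pod GETs gets spread out by ~200ms each. Raising the limits is one line each on the config; a minimal sketch, with an example kubeconfig path and illustrative values:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a rest.Config from a local kubeconfig (path is an example).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	// Higher limits reduce the "client-side throttling" waits seen in the log above.
	cfg.QPS = 50
	cfg.Burst = 100
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Println("client ready:", clientset != nil)
}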
	I0919 22:35:06.086145   67622 request.go:683] "Waited before sending request" delay="196.407438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:35:06.090017   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.285426   67622 request.go:683] "Waited before sending request" delay="195.307128ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158"
	I0919 22:35:06.486234   67622 request.go:683] "Waited before sending request" delay="197.363102ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:06.489211   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158" is "Ready"
	I0919 22:35:06.489239   67622 pod_ready.go:86] duration metric: took 399.19471ms for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.489249   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.685697   67622 request.go:683] "Waited before sending request" delay="196.373047ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m02"
	I0919 22:35:06.885918   67622 request.go:683] "Waited before sending request" delay="197.214097ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:06.888940   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m02" is "Ready"
	I0919 22:35:06.888966   67622 pod_ready.go:86] duration metric: took 399.709223ms for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.888977   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.086320   67622 request.go:683] "Waited before sending request" delay="197.234187ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m03"
	I0919 22:35:07.286155   67622 request.go:683] "Waited before sending request" delay="196.391562ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:07.289116   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m03" is "Ready"
	I0919 22:35:07.289145   67622 pod_ready.go:86] duration metric: took 400.160627ms for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.485647   67622 request.go:683] "Waited before sending request" delay="196.369215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:35:07.489356   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.685801   67622 request.go:683] "Waited before sending request" delay="196.331241ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158"
	I0919 22:35:07.886175   67622 request.go:683] "Waited before sending request" delay="197.36953ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:07.889268   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158" is "Ready"
	I0919 22:35:07.889292   67622 pod_ready.go:86] duration metric: took 399.911799ms for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.889300   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.085780   67622 request.go:683] "Waited before sending request" delay="196.397628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m02"
	I0919 22:35:08.286293   67622 request.go:683] "Waited before sending request" delay="197.157746ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:08.289542   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m02" is "Ready"
	I0919 22:35:08.289565   67622 pod_ready.go:86] duration metric: took 400.260559ms for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.289585   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.486054   67622 request.go:683] "Waited before sending request" delay="196.383406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m03"
	I0919 22:35:08.685765   67622 request.go:683] "Waited before sending request" delay="196.365381ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:08.688911   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m03" is "Ready"
	I0919 22:35:08.688939   67622 pod_ready.go:86] duration metric: took 399.348524ms for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.885240   67622 request.go:683] "Waited before sending request" delay="196.197284ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:35:08.888653   67622 pod_ready.go:83] waiting for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.086194   67622 request.go:683] "Waited before sending request" delay="197.430633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hdxxn"
	I0919 22:35:09.285936   67622 request.go:683] "Waited before sending request" delay="196.399441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:09.289309   67622 pod_ready.go:94] pod "kube-proxy-hdxxn" is "Ready"
	I0919 22:35:09.289344   67622 pod_ready.go:86] duration metric: took 400.666867ms for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.289356   67622 pod_ready.go:83] waiting for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.485857   67622 request.go:683] "Waited before sending request" delay="196.368869ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k2drm"
	I0919 22:35:09.685224   67622 request.go:683] "Waited before sending request" delay="196.312304ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:09.688202   67622 pod_ready.go:94] pod "kube-proxy-k2drm" is "Ready"
	I0919 22:35:09.688225   67622 pod_ready.go:86] duration metric: took 398.86315ms for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.688232   67622 pod_ready.go:83] waiting for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.885674   67622 request.go:683] "Waited before sending request" delay="197.37394ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-plrn2"
	I0919 22:35:10.085404   67622 request.go:683] "Waited before sending request" delay="196.238234ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:10.088413   67622 pod_ready.go:94] pod "kube-proxy-plrn2" is "Ready"
	I0919 22:35:10.088435   67622 pod_ready.go:86] duration metric: took 400.198021ms for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.285955   67622 request.go:683] "Waited before sending request" delay="197.399738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0919 22:35:10.289773   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.486274   67622 request.go:683] "Waited before sending request" delay="196.397415ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158"
	I0919 22:35:10.685865   67622 request.go:683] "Waited before sending request" delay="196.354476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:10.688789   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158" is "Ready"
	I0919 22:35:10.688812   67622 pod_ready.go:86] duration metric: took 399.015441ms for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.688821   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.886266   67622 request.go:683] "Waited before sending request" delay="197.365068ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m02"
	I0919 22:35:11.085685   67622 request.go:683] "Waited before sending request" delay="196.401015ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:11.088847   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m02" is "Ready"
	I0919 22:35:11.088884   67622 pod_ready.go:86] duration metric: took 400.056175ms for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.088895   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.285309   67622 request.go:683] "Waited before sending request" delay="196.306548ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m03"
	I0919 22:35:11.485951   67622 request.go:683] "Waited before sending request" delay="197.396443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:11.489000   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m03" is "Ready"
	I0919 22:35:11.489026   67622 pod_ready.go:86] duration metric: took 400.124566ms for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.489036   67622 pod_ready.go:40] duration metric: took 6.004562578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:11.533521   67622 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:35:11.535265   67622 out.go:179] * Done! kubectl is now configured to use "ha-984158" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.550284463Z" level=info msg="Starting container: ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a" id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.559054866Z" level=info msg="Started container" PID=2323 containerID=ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a description=kube-system/coredns-66bc5c9577-5gnbx/coredns id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67546437e6cd1431d56127b35c686ec4fbef541821d81e817187eac2eac44ae
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844458340Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844519430Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863307191Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863361143Z" level=info msg="Adding pod default_busybox-7b57f96db7-rnjl7 to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877409166Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877580199Z" level=info msg="Checking pod default_busybox-7b57f96db7-rnjl7 for CNI network kindnet (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.878483692Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.879359170Z" level=info msg="Ran pod sandbox 310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 with infra container: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880607012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880856313Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.881636849Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.882840066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:13 ha-984158 crio[940]: time="2025-09-19 22:35:13.826935593Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.299818076Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.300497300Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301041675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301798545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.302421301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305168065Z" level=info msg="Creating container: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305267569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.380968697Z" level=info msg="Created container 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.381641384Z" level=info msg="Starting container: 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e" id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.388597470Z" level=info msg="Started container" PID=2560 containerID=9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e description=default/busybox-7b57f96db7-rnjl7/busybox id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9169b9b095a98       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   50 seconds ago      Running             busybox                   0                   310dd81aa6739       busybox-7b57f96db7-rnjl7
	ea03ecb87a050       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   0                   a67546437e6cd       coredns-66bc5c9577-5gnbx
	d9aec8cde801c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       0                   f2f4dad3060cd       storage-provisioner
	7df7251c31862       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago       Running             coredns                   0                   549805b340720       coredns-66bc5c9577-ltjmz
	66e8ff6b4b2da       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago       Running             kindnet-cni               0                   ca0bb4eb3a856       kindnet-rd882
	c90c0cf2d2e8d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago       Running             kube-proxy                0                   6de94aa7ba9e1       kube-proxy-hdxxn
	6b6a81f4f6b23       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago       Running             kube-vip                  0                   fba7b712cd4d4       kube-vip-ha-984158
	ccf53f9534beb       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago       Running             kube-controller-manager   0                   15b128d3c6aed       kube-controller-manager-ha-984158
	01cd32d6daeeb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago       Running             kube-scheduler            0                   d854ebb188beb       kube-scheduler-ha-984158
	fda65fdd5e2b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago       Running             etcd                      0                   9e61b75f9a67d       etcd-ha-984158
	8ed4a5888320b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago       Running             kube-apiserver            0                   f7a2c4489feba       kube-apiserver-ha-984158
	
	
	==> coredns [7df7251c318624785e44160ab98a256321ca02663ac3f38b31058625169e65cf] <==
	[INFO] 10.244.1.2:34043 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006963816s
	[INFO] 10.244.1.2:38425 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137951s
	[INFO] 10.244.2.2:51391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001353s
	[INFO] 10.244.2.2:50788 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010898214s
	[INFO] 10.244.2.2:57984 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165964s
	[INFO] 10.244.2.2:46802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00010628s
	[INFO] 10.244.2.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133945s
	[INFO] 10.244.0.4:44778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139187s
	[INFO] 10.244.0.4:52371 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.0.4:44391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012178s
	[INFO] 10.244.0.4:42322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090724s
	[INFO] 10.244.1.2:47486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152861s
	[INFO] 10.244.1.2:33837 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197948s
	[INFO] 10.244.2.2:57569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187028s
	[INFO] 10.244.2.2:49299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201838s
	[INFO] 10.244.2.2:56021 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115909s
	[INFO] 10.244.0.4:58940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136946s
	[INFO] 10.244.0.4:36648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142402s
	[INFO] 10.244.1.2:54958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137478s
	[INFO] 10.244.1.2:49367 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111679s
	[INFO] 10.244.2.2:37477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176669s
	[INFO] 10.244.2.2:37006 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082361s
	[INFO] 10.244.0.4:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131909s
	[INFO] 10.244.0.4:59935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069811s
	[INFO] 10.244.0.4:50031 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124505s
	
	
	==> coredns [ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a] <==
	[INFO] 10.244.2.2:33714 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159773s
	[INFO] 10.244.2.2:40292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00009881s
	[INFO] 10.244.2.2:39630 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000811472s
	[INFO] 10.244.0.4:43002 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000112134s
	[INFO] 10.244.0.4:40782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000094347s
	[INFO] 10.244.1.2:36510 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033427373s
	[INFO] 10.244.1.2:41816 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158466s
	[INFO] 10.244.1.2:43260 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193529s
	[INFO] 10.244.2.2:48795 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161887s
	[INFO] 10.244.2.2:46683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133363s
	[INFO] 10.244.2.2:56162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135264s
	[INFO] 10.244.0.4:60293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085933s
	[INFO] 10.244.0.4:50296 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010728706s
	[INFO] 10.244.0.4:42098 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170789s
	[INFO] 10.244.0.4:50435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154329s
	[INFO] 10.244.1.2:49298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184582s
	[INFO] 10.244.1.2:58606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110603s
	[INFO] 10.244.2.2:33122 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186581s
	[INFO] 10.244.0.4:51847 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155018s
	[INFO] 10.244.0.4:49360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091433s
	[INFO] 10.244.1.2:44523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150525s
	[INFO] 10.244.1.2:48087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154066s
	[INFO] 10.244.2.2:47219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124336s
	[INFO] 10.244.2.2:58889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148273s
	[INFO] 10.244.0.4:47101 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088754s
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 39160f7d8b9f44c18aede41e4d267fbd
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m18s
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m21s
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m24s
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m17s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m29s (x8 over 2m29s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m29s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m29s (x8 over 2m29s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m29s (x8 over 2m29s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s                  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s                  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s                  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                2m6s                   kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           112s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           71s                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 d32b005f3b5146359774fcbe4364b90b
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         109s
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      111s
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        108s  kube-proxy       
	  Normal  RegisteredNode  110s  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  107s  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  71s   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	Name:               ha-984158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:36:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-984158-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 038f6eff3d614d78917c49afbf40a4e7
	  System UUID:                a60f86ef-6d86-4217-85ca-ad02416ddc34
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c7qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 etcd-ha-984158-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         72s
	  kube-system                 kindnet-269nt                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      74s
	  kube-system                 kube-apiserver-ha-984158-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-controller-manager-ha-984158-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-proxy-k2drm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         74s
	  kube-system                 kube-scheduler-ha-984158-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         72s
	  kube-system                 kube-vip-ha-984158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         72s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        71s   kube-proxy       
	  Normal  RegisteredNode  72s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  71s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  69s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [fda65fdd5e2b890fe6940cd0f6b5afae54775a44a1e30b23dc514a1ea4a5dd4c] <==
	{"level":"info","ts":"2025-09-19T22:34:42.874829Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.880780Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream MsgApp v2"}
	{"level":"warn","ts":"2025-09-19T22:34:42.880910Z","caller":"rafthttp/stream.go:264","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.880949Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.904957Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:42.908392Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:43.233880Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7185048267463743064 12593026477526642892 16737998778312655447)"}
	{"level":"info","ts":"2025-09-19T22:34:43.234252Z","caller":"membership/cluster.go:550","msg":"promote member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","promoted-member-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:43.234386Z","caller":"etcdserver/server.go:1752","msg":"applied a configuration change through raft","local-member-id":"aec36adc501070cc","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:34:51.604205Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:02.111263Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:12.622830Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:12.851680Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e8495135083f8257","bytes":1479617,"size":"1.5 MB","took":"30.017342016s"}
	{"level":"info","ts":"2025-09-19T22:35:40.335511Z","caller":"traceutil/trace.go:172","msg":"trace[580727823] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"128.447767ms","start":"2025-09-19T22:35:40.207051Z","end":"2025-09-19T22:35:40.335498Z","steps":["trace[580727823] 'process raft request'  (duration: 128.303588ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:40.335758Z","caller":"traceutil/trace.go:172","msg":"trace[1969207353] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1195; }","duration":"117.354033ms","start":"2025-09-19T22:35:40.218388Z","end":"2025-09-19T22:35:40.335742Z","steps":["trace[1969207353] 'read index received'  (duration: 117.348211ms)","trace[1969207353] 'applied index is now lower than readState.Index'  (duration: 4.715µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:40.335880Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.473932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:40.335910Z","caller":"traceutil/trace.go:172","msg":"trace[12563226] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:1018; }","duration":"117.51944ms","start":"2025-09-19T22:35:40.218383Z","end":"2025-09-19T22:35:40.335902Z","steps":["trace[12563226] 'agreement among raft nodes before linearized reading'  (duration: 117.444854ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:41.265249Z","caller":"traceutil/trace.go:172","msg":"trace[1252869991] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1199; }","duration":"121.843359ms","start":"2025-09-19T22:35:41.143386Z","end":"2025-09-19T22:35:41.265229Z","steps":["trace[1252869991] 'read index received'  (duration: 121.835594ms)","trace[1252869991] 'applied index is now lower than readState.Index'  (duration: 6.337µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398137Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.71266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.398198Z","caller":"traceutil/trace.go:172","msg":"trace[1812653205] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1020; }","duration":"254.803848ms","start":"2025-09-19T22:35:41.143376Z","end":"2025-09-19T22:35:41.398180Z","steps":["trace[1812653205] 'agreement among raft nodes before linearized reading'  (duration: 121.941063ms)","trace[1812653205] 'range keys from in-memory index tree'  (duration: 132.739969ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398804Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.156113ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6221891540473536501 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:996 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:6221891540473536499 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658165Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e8495135083f8257","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.83656ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658213Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"63b66b54cc365658","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.890877ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.659958Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.463182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.660011Z","caller":"traceutil/trace.go:172","msg":"trace[1201229941] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"114.533322ms","start":"2025-09-19T22:35:41.545465Z","end":"2025-09-19T22:35:41.659998Z","steps":["trace[1201229941] 'range keys from in-memory index tree'  (duration: 114.424434ms)"],"step_count":1}
	
	
	==> kernel <==
	 22:36:05 up  1:18,  0 users,  load average: 1.41, 0.70, 0.48
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [66e8ff6b4b2da8ea01c46a247aa4714a90f2ed1d2ba051443dc7790f7f9aa6d2] <==
	I0919 22:35:18.711554       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:28.716289       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:28.716329       1 main.go:301] handling current node
	I0919 22:35:28.716350       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:28.716364       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:28.716578       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:28.716595       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:38.711253       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:38.711317       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:38.711571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:38.711585       1 main.go:301] handling current node
	I0919 22:35:38.711598       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:38.711602       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:48.710041       1 main.go:301] handling current node
	I0919 22:35:48.710057       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:48.710061       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710325       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:48.710351       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:58.715188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:58.715226       1 main.go:301] handling current node
	I0919 22:35:58.715243       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:58.715250       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:58.715473       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:58.715492       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ed4a5888320b17174d5fd3227517f4c634bc157381bb9771474bfa5169aab2f] <==
	I0919 22:33:44.098000       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0919 22:33:44.107869       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:33:45.993421       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:33:46.743338       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:33:46.796068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:46.799874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:34:55.461764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:00.508368       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:35:16.679730       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50288: use of closed network connection
	E0919 22:35:16.855038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50310: use of closed network connection
	E0919 22:35:17.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50338: use of closed network connection
	E0919 22:35:17.243171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50346: use of closed network connection
	E0919 22:35:17.421526       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50372: use of closed network connection
	E0919 22:35:17.591329       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50402: use of closed network connection
	E0919 22:35:17.761924       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50422: use of closed network connection
	E0919 22:35:17.931932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50438: use of closed network connection
	E0919 22:35:18.091452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50456: use of closed network connection
	E0919 22:35:18.368592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50480: use of closed network connection
	E0919 22:35:18.524781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50484: use of closed network connection
	E0919 22:35:18.691736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50510: use of closed network connection
	E0919 22:35:18.869219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50534: use of closed network connection
	E0919 22:35:19.030842       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50552: use of closed network connection
	E0919 22:35:19.201169       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50566: use of closed network connection
	I0919 22:36:01.868494       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:02.874315       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ccf53f9534beb8a8c8742cb5e71e0540bfd9bc439877b525756c21d5eef9b422] <==
	I0919 22:33:45.991296       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:33:45.991359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:33:45.991661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:33:45.992619       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:33:45.992661       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:33:45.992715       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:33:45.992824       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:33:45.992860       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:33:45.992945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:33:45.992988       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0919 22:33:45.994081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:33:45.994164       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:33:45.997463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:33:46.000645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:33:46.007588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:33:46.014824       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:33:46.019019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:34:00.995932       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0919 22:34:13.994601       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-f5gnl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-f5gnl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:34:14.552916       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m02\" does not exist"
	I0919 22:34:14.582362       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:34:15.998546       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:34:51.526332       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m03\" does not exist"
	I0919 22:34:51.541723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:34:56.108424       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [c90c0cf2d2e8d28017db69b5b6570bb146918d86f62813e08b6cf30633aabf39] <==
	I0919 22:33:48.275684       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:33:48.343595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:33:48.444904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:33:48.444958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:33:48.445144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:33:48.471588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:33:48.471666       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:33:48.477726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:33:48.478178       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:33:48.478219       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:33:48.480033       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:33:48.480053       1 config.go:200] "Starting service config controller"
	I0919 22:33:48.480068       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:33:48.480085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:33:48.482031       1 config.go:309] "Starting node config controller"
	I0919 22:33:48.482049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:33:48.482057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:33:48.480508       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:33:48.482857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:33:48.580234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:33:48.582666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:33:48.583733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [01cd32d6daeeb8f86625ec5d90712811aa7cc0b7dee503e21a57e8bd093892cc] <==
	E0919 22:33:39.908093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:33:39.911081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:33:39.988409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:33:40.028297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:33:40.063508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:33:40.098835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:33:40.219678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:33:40.224737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:33:40.235874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:33:40.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0919 22:33:42.406311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:34:14.584511       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-plrn2" node="ha-984158-m02"
	E0919 22:34:14.584664       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-plrn2"
	E0919 22:34:51.565644       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.565863       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 040bf3f7-8d97-4799-b3a2-12b57eec38ef(kube-system/kube-proxy-k2drm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565922       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565851       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	E0919 22:34:51.565999       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6db503ca-eaf1-4ffc-8418-f778e65529c9(kube-system/kube-proxy-tqv25) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	E0919 22:34:51.565619       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	E0919 22:34:51.566066       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2040513e-991f-4c82-9b69-1e3fa299841a(kube-system/kindnet-gtv88) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	E0919 22:34:51.568208       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	I0919 22:34:51.568393       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	I0919 22:34:51.568363       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.568334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	I0919 22:34:51.574210       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	
	
	==> kubelet <==
	Sep 19 22:34:13 ha-984158 kubelet[1691]: E0919 22:34:13.925352    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321253925085483  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:23 ha-984158 kubelet[1691]: E0919 22:34:23.926790    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321263926568823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:23 ha-984158 kubelet[1691]: E0919 22:34:23.926836    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321263926568823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928784    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928816    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.930936    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.931007    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932414    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932450    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934355    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934407    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:12 ha-984158 kubelet[1691]: I0919 22:35:12.604999    1691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984pg\" (UniqueName: \"kubernetes.io/projected/68cd1643-e7c7-480f-af91-8f2f4eafb766-kube-api-access-984pg\") pod \"busybox-7b57f96db7-rnjl7\" (UID: \"68cd1643-e7c7-480f-af91-8f2f4eafb766\") " pod="default/busybox-7b57f96db7-rnjl7"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935689    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935726    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:19 ha-984158 kubelet[1691]: E0919 22:35:19.030824    1691 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40998->127.0.0.1:37933: write tcp 127.0.0.1:40998->127.0.0.1:37933: write: broken pipe
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937510    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937554    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938855    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938899    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940553    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940595    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942304    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942351    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943680    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943728    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/CopyFile FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/CopyFile (15.84s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopSecondaryNode (16.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 node stop m02 --alsologtostderr -v 5: (13.621671076s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (548.796086ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:20.360682   87660 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:20.360812   87660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:20.360824   87660 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:20.360831   87660 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:20.361028   87660 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:20.361229   87660 out.go:368] Setting JSON to false
	I0919 22:36:20.361249   87660 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:20.361409   87660 notify.go:220] Checking for updates...
	I0919 22:36:20.361616   87660 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:20.361640   87660 status.go:174] checking status of ha-984158 ...
	I0919 22:36:20.362095   87660 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:20.385662   87660 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:20.385700   87660 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:20.385966   87660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:20.403465   87660 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:20.403714   87660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:20.403765   87660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:20.421830   87660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:20.515624   87660 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:20.520187   87660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:20.532011   87660 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:20.587167   87660 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:2 ContainersPaused:0 ContainersStopped:2 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:36:20.576464773 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:20.587735   87660 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:20.587765   87660 api_server.go:166] Checking apiserver status ...
	I0919 22:36:20.587806   87660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:20.599805   87660 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:20.610805   87660 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:20.610865   87660 ssh_runner.go:195] Run: ls
	I0919 22:36:20.615045   87660 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:20.621331   87660 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:20.621365   87660 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:20.621377   87660 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:20.621392   87660 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:20.621713   87660 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:20.642981   87660 status.go:371] ha-984158-m02 host status = "Stopped" (err=<nil>)
	I0919 22:36:20.643006   87660 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:20.643087   87660 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:20.643152   87660 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:20.643585   87660 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:20.663214   87660 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:20.663242   87660 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:20.663493   87660 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:20.685824   87660 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:20.686088   87660 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:20.686148   87660 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:20.707095   87660 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:20.800662   87660 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:20.813275   87660 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:20.813298   87660 api_server.go:166] Checking apiserver status ...
	I0919 22:36:20.813329   87660 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:20.824640   87660 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:20.836032   87660 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:20.836088   87660 ssh_runner.go:195] Run: ls
	I0919 22:36:20.839803   87660 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:20.844263   87660 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:20.844285   87660 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:20.844293   87660 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:20.844307   87660 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:20.844576   87660 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:20.862580   87660 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:20.862600   87660 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:20.862606   87660 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:380: status says not three hosts are running: args "out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5": ha-984158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-984158-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-984158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-984158-m04
type: Worker
host: Stopped
kubelet: Stopped

                                                
                                                
ha_test.go:383: status says not three kubelets are running: args "out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5": ha-984158
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-984158-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-984158-m03
type: Control Plane
host: Running
kubelet: Running
apiserver: Running
kubeconfig: Configured

                                                
                                                
ha-984158-m04
type: Worker
host: Stopped
kubelet: Stopped
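
The two assertions at ha_test.go:380 and ha_test.go:383 reject the status output above because only two of the four nodes (ha-984158 and ha-984158-m03) report "host: Running" / "kubelet: Running" after m02 is stopped, while m04 was already down. A minimal Go sketch of an equivalent check follows, assuming a simple substring count over the status text; the actual helper in ha_test.go may be implemented differently.

// Minimal sketch, not the actual ha_test.go assertion: count how many nodes
// report a running host/kubelet in the `minikube status` output and require
// three, the expected state after stopping only one secondary control plane.
package main

import (
	"fmt"
	"strings"
)

func checkStatus(statusOut string) error {
	hosts := strings.Count(statusOut, "host: Running")
	kubelets := strings.Count(statusOut, "kubelet: Running")
	if hosts != 3 {
		return fmt.Errorf("status says not three hosts are running: got %d", hosts)
	}
	if kubelets != 3 {
		return fmt.Errorf("status says not three kubelets are running: got %d", kubelets)
	}
	return nil
}

func main() {
	// In the run above only two nodes report Running, so both counts are 2
	// and the check fails, matching the error messages in this log.
	example := "host: Running\nkubelet: Running\n" +
		"host: Stopped\nkubelet: Stopped\n" +
		"host: Running\nkubelet: Running\n" +
		"host: Stopped\nkubelet: Stopped\n"
	if err := checkStatus(example); err != nil {
		fmt.Println(err)
	}
}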

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:33:25.030742493Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b35e3615d35b58bcec7825bb039821b1dfb6293e56fe1316d0ae491d5b3b0558",
	            "SandboxKey": "/var/run/docker/netns/b35e3615d35b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:4d:99:af:3d:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "150c15de67a84040f10d82e99ed82c2230b34908474820017c5633e8a5513d79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.18599578s)
helpers_test.go:260: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m03.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m03_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:33:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:33:19.901060   67622 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:19.901185   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901193   67622 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:19.901198   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901448   67622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:33:19.902017   67622 out.go:368] Setting JSON to false
	I0919 22:33:19.903166   67622 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4550,"bootTime":1758316650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:33:19.903283   67622 start.go:140] virtualization: kvm guest
	I0919 22:33:19.906578   67622 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:33:19.908489   67622 notify.go:220] Checking for updates...
	I0919 22:33:19.908508   67622 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:33:19.910361   67622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:33:19.912958   67622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:19.914823   67622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:33:19.919772   67622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:33:19.921444   67622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:33:19.923242   67622 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:33:19.947549   67622 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:33:19.947649   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.004707   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:19.994191177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.004832   67622 docker.go:318] overlay module found
	I0919 22:33:20.006907   67622 out.go:179] * Using the docker driver based on user configuration
	I0919 22:33:20.008195   67622 start.go:304] selected driver: docker
	I0919 22:33:20.008214   67622 start.go:918] validating driver "docker" against <nil>
	I0919 22:33:20.008227   67622 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:33:20.008818   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.067697   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:20.055128215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.067871   67622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:33:20.068167   67622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:33:20.070129   67622 out.go:179] * Using Docker driver with root privileges
	I0919 22:33:20.071439   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:20.071513   67622 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:33:20.071523   67622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:33:20.071600   67622 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:c
ni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:20.073188   67622 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:33:20.074628   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:20.076439   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:20.078066   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.078159   67622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:33:20.078159   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:20.078174   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:20.078333   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:20.078348   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:20.078744   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:20.078777   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json: {Name:mk745b6092cc48756321ca371e559184d12db2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:20.100036   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:20.100059   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:20.100081   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:20.100133   67622 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:20.100248   67622 start.go:364] duration metric: took 93.303µs to acquireMachinesLock for "ha-984158"
	I0919 22:33:20.100277   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:20.100380   67622 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:33:20.103382   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:20.103623   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:20.103675   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:20.103751   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:20.103785   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.103860   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:20.103880   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103895   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.104259   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:33:20.122340   67622 cli_runner.go:211] docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:33:20.122418   67622 network_create.go:284] running [docker network inspect ha-984158] to gather additional debugging logs...
	I0919 22:33:20.122455   67622 cli_runner.go:164] Run: docker network inspect ha-984158
	W0919 22:33:20.139578   67622 cli_runner.go:211] docker network inspect ha-984158 returned with exit code 1
	I0919 22:33:20.139605   67622 network_create.go:287] error running [docker network inspect ha-984158]: docker network inspect ha-984158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158 not found
	I0919 22:33:20.139623   67622 network_create.go:289] output of [docker network inspect ha-984158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158 not found
	
	** /stderr **
	I0919 22:33:20.139738   67622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:20.159001   67622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b807f0}
	I0919 22:33:20.159067   67622 network_create.go:124] attempt to create docker network ha-984158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:33:20.159151   67622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-984158 ha-984158
	I0919 22:33:20.220465   67622 network_create.go:108] docker network ha-984158 192.168.49.0/24 created
	I0919 22:33:20.220505   67622 kic.go:121] calculated static IP "192.168.49.2" for the "ha-984158" container
	I0919 22:33:20.220576   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:20.238299   67622 cli_runner.go:164] Run: docker volume create ha-984158 --label name.minikube.sigs.k8s.io=ha-984158 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:20.257860   67622 oci.go:103] Successfully created a docker volume ha-984158
	I0919 22:33:20.258049   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --entrypoint /usr/bin/test -v ha-984158:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:20.650160   67622 oci.go:107] Successfully prepared a docker volume ha-984158
	I0919 22:33:20.650207   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.650234   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:20.650319   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:24.923696   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273335756s)
	I0919 22:33:24.923745   67622 kic.go:203] duration metric: took 4.273508289s to extract preloaded images to volume ...
	W0919 22:33:24.923837   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:24.923868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:24.923905   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:24.980440   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158 --name ha-984158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158 --network ha-984158 --ip 192.168.49.2 --volume ha-984158:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:25.243904   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Running}}
	I0919 22:33:25.262964   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:25.282632   67622 cli_runner.go:164] Run: docker exec ha-984158 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:25.335702   67622 oci.go:144] the created container "ha-984158" has a running status.
	I0919 22:33:25.335743   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa...
	I0919 22:33:26.151425   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:26.151477   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:26.176792   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.194873   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:26.194911   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:26.242371   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.260832   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:26.260926   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.280776   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.281060   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.281074   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:26.419419   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.419451   67622 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:33:26.419523   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.438011   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.438316   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.438334   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:33:26.587806   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.587878   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.606861   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.607093   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.607134   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:26.743969   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:26.744008   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:26.744055   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:26.744068   67622 provision.go:84] configureAuth start
	I0919 22:33:26.744152   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:26.765302   67622 provision.go:143] copyHostCerts
	I0919 22:33:26.765368   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765405   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:26.765414   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765489   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:26.765575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765596   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:26.765600   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765626   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:26.765682   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765696   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:26.765702   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765725   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:26.765773   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:33:27.052522   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:27.052586   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:27.052619   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.077750   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.179645   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:27.179718   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:27.210288   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:27.210351   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:33:27.238586   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:27.238648   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:33:27.264405   67622 provision.go:87] duration metric: took 520.31998ms to configureAuth
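
The "generating server cert" step above issues a certificate signed by minikube's CA with SANs for 127.0.0.1, 192.168.49.2, ha-984158, localhost and minikube. A rough, self-contained Go sketch of that kind of issuance: it creates a throwaway CA instead of loading ca.pem/ca-key.pem, so it only illustrates the SAN handling, not minikube's actual code path.

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	func main() {
		// A throwaway CA stands in for minikube's ca.pem / ca-key.pem here.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		caCert, _ := x509.ParseCertificate(caDER)
	
		// Server key and certificate carrying the SANs seen in the log.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-984158"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-984158", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
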
	I0919 22:33:27.264432   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:27.264630   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:27.264744   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.284923   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:27.285168   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:27.285188   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:27.533206   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:27.533232   67622 machine.go:96] duration metric: took 1.272377771s to provisionDockerMachine
	I0919 22:33:27.533245   67622 client.go:171] duration metric: took 7.429561262s to LocalClient.Create
	I0919 22:33:27.533269   67622 start.go:167] duration metric: took 7.429646395s to libmachine.API.Create "ha-984158"
	I0919 22:33:27.533281   67622 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:33:27.533292   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:27.533378   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:27.533430   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.551574   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.651298   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:27.655006   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:27.655037   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:27.655045   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:27.655051   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:27.655070   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:27.655147   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:27.655229   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:27.655238   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:27.655339   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:27.664695   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:27.695230   67622 start.go:296] duration metric: took 161.927495ms for postStartSetup
	I0919 22:33:27.695585   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.713847   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:27.714141   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:27.714182   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.735921   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.829368   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:27.833923   67622 start.go:128] duration metric: took 7.733528511s to createHost
	I0919 22:33:27.833953   67622 start.go:83] releasing machines lock for "ha-984158", held for 7.733693746s
	I0919 22:33:27.834022   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.851363   67622 ssh_runner.go:195] Run: cat /version.json
	I0919 22:33:27.851382   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:27.851422   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.851435   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.870773   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.871172   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:28.037834   67622 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:28.042707   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:28.184533   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:28.189494   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.213778   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:28.213869   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.245273   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:28.245311   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:28.245342   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:28.245409   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:28.260712   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:28.273221   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:28.273285   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:28.287690   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:28.303163   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:28.371756   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:28.449427   67622 docker.go:234] disabling docker service ...
	I0919 22:33:28.449499   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:28.467447   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:28.481298   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:28.558342   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:28.661953   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:28.675151   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:28.695465   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:28.695540   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.709844   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:28.709908   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.720817   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.731627   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.742506   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:28.753955   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.765830   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.784178   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.795285   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:28.804935   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:28.814326   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:28.918546   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:33:29.014541   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:29.014608   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:29.018746   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:29.018808   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:29.023643   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:29.059951   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
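
The "Will wait 60s for socket path /var/run/crio/crio.sock" and "Will wait 60s for crictl version" lines above are bounded polls that run after the CRI-O restart. A minimal sketch of that kind of wait, using only the Go standard library (the path and timeout mirror the log; the code is illustrative rather than minikube's implementation):

	package main
	
	import (
		"fmt"
		"os"
		"time"
	)
	
	// waitForPath polls for a filesystem path until it exists or the timeout expires.
	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}
	
	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is ready")
	}
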
	I0919 22:33:29.060029   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.098887   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.139500   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:29.141059   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:29.158455   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:29.162464   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.175140   67622 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Soc
ketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:33:29.175280   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:29.175333   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.248936   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.248961   67622 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:33:29.249018   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.287448   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.287472   67622 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:33:29.287479   67622 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:33:29.287577   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:33:29.287645   67622 ssh_runner.go:195] Run: crio config
	I0919 22:33:29.333242   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:29.333266   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:29.333277   67622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:33:29.333307   67622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:33:29.333435   67622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:33:29.333463   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:29.333506   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:29.346933   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:29.347143   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:33:29.347207   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:29.356691   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:29.356785   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:33:29.366595   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:33:29.386942   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:29.409639   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:33:29.428838   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:33:29.449681   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:29.453679   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.465645   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:29.534315   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:29.558739   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:33:29.558767   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:29.558787   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:29.558925   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:29.558985   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:29.559000   67622 certs.go:256] generating profile certs ...
	I0919 22:33:29.559069   67622 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:29.559085   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt with IP's: []
	I0919 22:33:30.287530   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt ...
	I0919 22:33:30.287574   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt: {Name:mk4722cc3499628a90845973a8533bb2f9abaeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287824   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key ...
	I0919 22:33:30.287842   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key: {Name:mk95f513fb24356a441cd3443b0c241a35c61186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287965   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f
	I0919 22:33:30.287986   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:33:30.489410   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f ...
	I0919 22:33:30.489443   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f: {Name:mk50e3acb42d56649151d2b237558cdb8ee1e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489635   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f ...
	I0919 22:33:30.489654   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f: {Name:mke306934752782de0837143fc2872d74f6e5eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489765   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:30.489897   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:30.489990   67622 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:30.490013   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt with IP's: []
	I0919 22:33:30.692692   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt ...
	I0919 22:33:30.692725   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt: {Name:mkec855f3fc5cc887af952272036f6b6db6c122d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.692913   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key ...
	I0919 22:33:30.692929   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key: {Name:mk41b934f9d330e25cbaab5814efeb52422665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.693033   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:30.693058   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:30.693082   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:30.693113   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:30.693131   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:30.693163   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:30.693182   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:30.693202   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:30.693280   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:30.693327   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:30.693343   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:30.693379   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:30.693413   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:30.693444   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:30.693498   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:30.693554   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:30.693575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:30.693594   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:30.694169   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:30.721034   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:30.747256   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:30.773231   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:30.799758   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:33:30.825801   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:30.852404   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:30.879195   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:30.905339   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:30.934694   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:30.960677   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:30.987763   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:33:31.008052   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:31.014839   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:31.025609   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029511   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029570   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.036708   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:31.047387   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:31.058096   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062519   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062579   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.070083   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:31.080599   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:31.091228   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095407   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095480   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.102644   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
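
The openssl runs above compute each CA file's subject hash (e.g. 3ec20f2e for 181752.pem, b5213941 for minikubeCA.pem) and then symlink /etc/ssl/certs/<hash>.0 to the certificate so OpenSSL-based clients can locate it by hash. A small Go sketch that shells out to the same openssl invocation and creates the link; the paths and helper name are illustrative, and it would need to run as root on the target node:

	package main
	
	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)
	
	// linkBySubjectHash mirrors `openssl x509 -hash -noout -in <cert>` followed by
	// `ln -fs <cert> /etc/ssl/certs/<hash>.0`.
	func linkBySubjectHash(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		os.Remove(link) // ignore error; recreate the symlink idempotently
		return os.Symlink(certPath, link)
	}
	
	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}
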
	I0919 22:33:31.114044   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:31.118226   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:31.118374   67622 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:31.118467   67622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:33:31.118521   67622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:33:31.155950   67622 cri.go:89] found id: ""
	I0919 22:33:31.156024   67622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:33:31.166037   67622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:33:31.175817   67622 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:33:31.175867   67622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:33:31.185690   67622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:33:31.185707   67622 kubeadm.go:157] found existing configuration files:
	
	I0919 22:33:31.185748   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:33:31.195069   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:33:31.195184   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:33:31.204614   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:33:31.216208   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:33:31.216271   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:33:31.226344   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.239080   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:33:31.239168   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.248993   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:33:31.258113   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:33:31.258175   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:33:31.267147   67622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:33:31.307922   67622 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:33:31.308018   67622 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:33:31.323647   67622 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:33:31.323774   67622 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:33:31.323839   67622 kubeadm.go:310] OS: Linux
	I0919 22:33:31.323926   67622 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:33:31.324015   67622 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:33:31.324149   67622 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:33:31.324222   67622 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:33:31.324293   67622 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:33:31.324356   67622 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:33:31.324417   67622 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:33:31.324484   67622 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:33:31.377266   67622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:33:31.377414   67622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:33:31.377573   67622 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:33:31.384351   67622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:33:31.386660   67622 out.go:252]   - Generating certificates and keys ...
	I0919 22:33:31.386732   67622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:33:31.386811   67622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:33:31.789403   67622 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:33:31.939575   67622 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:33:32.401218   67622 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:33:32.595052   67622 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:33:33.118331   67622 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:33:33.118543   67622 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.059417   67622 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:33:34.059600   67622 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.382200   67622 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:33:34.860984   67622 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:33:34.940846   67622 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:33:34.940919   67622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:33:35.161325   67622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:33:35.301598   67622 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:33:35.610006   67622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:33:35.767736   67622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:33:36.001912   67622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:33:36.002376   67622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:33:36.005697   67622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:33:36.010843   67622 out.go:252]   - Booting up control plane ...
	I0919 22:33:36.010955   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:33:36.011044   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:33:36.011162   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:33:36.018352   67622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:33:36.018463   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:33:36.024835   67622 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:33:36.025002   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:33:36.025072   67622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:33:36.099408   67622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:33:36.099593   67622 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:33:37.100521   67622 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186505s
	I0919 22:33:37.103674   67622 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:33:37.103813   67622 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:33:37.103961   67622 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:33:37.104092   67622 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:33:38.781776   67622 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.678113429s
	I0919 22:33:39.011334   67622 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.907735584s
	I0919 22:33:43.273677   67622 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.17006372s
	I0919 22:33:43.285923   67622 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:33:43.298989   67622 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:33:43.310631   67622 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:33:43.310870   67622 kubeadm.go:310] [mark-control-plane] Marking the node ha-984158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:33:43.319951   67622 kubeadm.go:310] [bootstrap-token] Using token: wc3lep.4w3ocubibd25hbwe
	I0919 22:33:43.321976   67622 out.go:252]   - Configuring RBAC rules ...
	I0919 22:33:43.322154   67622 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:33:43.325670   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:33:43.333517   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:33:43.338509   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:33:43.342046   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:33:43.345237   67622 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:33:43.680686   67622 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:33:44.099041   67622 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:33:44.680531   67622 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:33:44.681480   67622 kubeadm.go:310] 
	I0919 22:33:44.681572   67622 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:33:44.681591   67622 kubeadm.go:310] 
	I0919 22:33:44.681690   67622 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:33:44.681708   67622 kubeadm.go:310] 
	I0919 22:33:44.681761   67622 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:33:44.681854   67622 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:33:44.681910   67622 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:33:44.681916   67622 kubeadm.go:310] 
	I0919 22:33:44.681968   67622 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:33:44.681978   67622 kubeadm.go:310] 
	I0919 22:33:44.682015   67622 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:33:44.682021   67622 kubeadm.go:310] 
	I0919 22:33:44.682066   67622 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:33:44.682162   67622 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:33:44.682244   67622 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:33:44.682258   67622 kubeadm.go:310] 
	I0919 22:33:44.682378   67622 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:33:44.682497   67622 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:33:44.682510   67622 kubeadm.go:310] 
	I0919 22:33:44.682620   67622 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.682733   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 22:33:44.682757   67622 kubeadm.go:310] 	--control-plane 
	I0919 22:33:44.682761   67622 kubeadm.go:310] 
	I0919 22:33:44.682837   67622 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:33:44.682844   67622 kubeadm.go:310] 
	I0919 22:33:44.682919   67622 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.683036   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 22:33:44.685970   67622 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:33:44.686071   67622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
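For reference, the --discovery-token-ca-cert-hash printed in the join commands above is the SHA-256 of the cluster CA public key; it can be recomputed on the control plane with openssl (a generic sketch, assuming the standard kubeadm CA path /etc/kubernetes/pki/ca.crt):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
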
	I0919 22:33:44.686097   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:44.686119   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:44.688616   67622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:33:44.690471   67622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:33:44.695364   67622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:33:44.695381   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:33:44.715791   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
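A quick way to confirm the CNI manifest applied above actually rolled out is to watch its daemonset; the daemonset name kindnet and the kubectl context name are assumptions based on the default minikube CNI manifest and profile naming:

    kubectl --context ha-984158 -n kube-system rollout status daemonset kindnet --timeout=60s
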
	I0919 22:33:44.939557   67622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:33:44.939639   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:44.939678   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158 minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=true
	I0919 22:33:45.023827   67622 ops.go:34] apiserver oom_adj: -16
	I0919 22:33:45.023957   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:45.524455   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.024018   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.524600   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.024332   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.524121   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.592879   67622 kubeadm.go:1105] duration metric: took 2.653303844s to wait for elevateKubeSystemPrivileges
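The repeated "kubectl get sa default" calls above (roughly one every 500ms) are a simple poll that waits for the default service account to exist before the step is considered done; a minimal shell equivalent of that wait loop, using the same paths as the log:

    until sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done
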
	I0919 22:33:47.592920   67622 kubeadm.go:394] duration metric: took 16.47455539s to StartCluster
	I0919 22:33:47.592944   67622 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593012   67622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:47.593661   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593878   67622 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:47.593899   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:33:47.593915   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:33:47.593910   67622 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:33:47.593968   67622 addons.go:69] Setting storage-provisioner=true in profile "ha-984158"
	I0919 22:33:47.593987   67622 addons.go:238] Setting addon storage-provisioner=true in "ha-984158"
	I0919 22:33:47.594014   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.594020   67622 addons.go:69] Setting default-storageclass=true in profile "ha-984158"
	I0919 22:33:47.594052   67622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-984158"
	I0919 22:33:47.594180   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:47.594397   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.594490   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.616114   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:33:47.616790   67622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:33:47.616815   67622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:33:47.616821   67622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:33:47.616827   67622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:33:47.616832   67622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:33:47.616874   67622 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:33:47.617290   67622 addons.go:238] Setting addon default-storageclass=true in "ha-984158"
	I0919 22:33:47.617334   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.617664   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.618198   67622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:33:47.619811   67622 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.619828   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:33:47.619873   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639214   67622 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.639233   67622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:33:47.639292   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639429   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.661245   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.673462   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:33:47.757401   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.772885   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.832329   67622 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
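The long sed pipeline at 22:33:47.673 rewrites the coredns ConfigMap so the Corefile gains a hosts entry for host.minikube.internal plus a log directive; the injected text below is taken verbatim from that command, while the surrounding stock Corefile lines are assumed:

    .:53 {
        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf
        ...
    }
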
	I0919 22:33:48.046946   67622 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:33:48.048036   67622 addons.go:514] duration metric: took 454.124749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:33:48.048079   67622 start.go:246] waiting for cluster config update ...
	I0919 22:33:48.048094   67622 start.go:255] writing updated cluster config ...
	I0919 22:33:48.049801   67622 out.go:203] 
	I0919 22:33:48.051165   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:48.051243   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.053137   67622 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:33:48.054674   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:48.056311   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:48.057779   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.057806   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:48.057888   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:48.057928   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:48.057940   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:48.058063   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.078572   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:48.078592   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:48.078612   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:48.078641   67622 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:48.078744   67622 start.go:364] duration metric: took 83.645µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:33:48.078773   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:48.078850   67622 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:33:48.081555   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:48.081669   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:48.081703   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:48.081781   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:48.081822   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081843   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.081910   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:48.081940   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081960   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.082241   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:48.099940   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc0016638f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:33:48.099978   67622 kic.go:121] calculated static IP "192.168.49.3" for the "ha-984158-m02" container
	I0919 22:33:48.100047   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:48.119768   67622 cli_runner.go:164] Run: docker volume create ha-984158-m02 --label name.minikube.sigs.k8s.io=ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:48.140861   67622 oci.go:103] Successfully created a docker volume ha-984158-m02
	I0919 22:33:48.140948   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --entrypoint /usr/bin/test -v ha-984158-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:48.564029   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m02
	I0919 22:33:48.564088   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.564128   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:48.564199   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:52.827364   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.263115206s)
	I0919 22:33:52.827395   67622 kic.go:203] duration metric: took 4.263265347s to extract preloaded images to volume ...
	W0919 22:33:52.827486   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:52.827514   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:52.827554   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:52.885075   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m02 --name ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m02 --network ha-984158 --ip 192.168.49.3 --volume ha-984158-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:53.180687   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Running}}
	I0919 22:33:53.199679   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.219636   67622 cli_runner.go:164] Run: docker exec ha-984158-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:53.277586   67622 oci.go:144] the created container "ha-984158-m02" has a running status.
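Since the container only publishes 22/tcp (and the other service ports) to ephemeral 127.0.0.1 ports, the SSH endpoint the tooling uses a few lines later (127.0.0.1:32788) can be looked up at any time with docker port:

    docker port ha-984158-m02 22/tcp
    # prints something like 127.0.0.1:32788
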
	I0919 22:33:53.277613   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa...
	I0919 22:33:53.439379   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:53.439435   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:53.481669   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.502631   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:53.502661   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:53.550818   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.569934   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:53.570033   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.591163   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.591567   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.591594   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:53.732425   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.732454   67622 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:33:53.732620   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.753544   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.753771   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.753787   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:33:53.905778   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.905859   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.925947   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.926237   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.926262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:54.064017   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:54.064058   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:54.064091   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:54.064128   67622 provision.go:84] configureAuth start
	I0919 22:33:54.064205   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.083365   67622 provision.go:143] copyHostCerts
	I0919 22:33:54.083408   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083437   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:54.083446   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083518   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:54.083599   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083619   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:54.083625   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083651   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:54.083695   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083712   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:54.083718   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083741   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:54.083825   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
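If the SAN list baked into that freshly generated server certificate ever needs checking, openssl can print it directly (a generic inspection command, not something this run performs):

    openssl x509 -noout -text \
      -in /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem \
      | grep -A1 'Subject Alternative Name'
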
	I0919 22:33:54.283812   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:54.283869   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:54.283908   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.302357   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.401996   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:54.402067   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:54.430462   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:54.430540   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:33:54.457015   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:54.457097   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:33:54.483980   67622 provision.go:87] duration metric: took 419.834494ms to configureAuth
	I0919 22:33:54.484006   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:54.484189   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:54.484291   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.502801   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:54.503005   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:54.503020   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:54.741937   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:54.741974   67622 machine.go:96] duration metric: took 1.172016504s to provisionDockerMachine
	I0919 22:33:54.741989   67622 client.go:171] duration metric: took 6.660276334s to LocalClient.Create
	I0919 22:33:54.742015   67622 start.go:167] duration metric: took 6.660346483s to libmachine.API.Create "ha-984158"
	I0919 22:33:54.742030   67622 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:33:54.742043   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:54.742141   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:54.742204   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.760779   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.861057   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:54.864884   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:54.864926   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:54.864936   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:54.864942   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:54.864952   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:54.865018   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:54.865096   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:54.865119   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:54.865208   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:54.874518   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:54.902675   67622 start.go:296] duration metric: took 160.632418ms for postStartSetup
	I0919 22:33:54.903619   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.921915   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:54.922275   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:54.922332   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.939498   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.032204   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:55.036544   67622 start.go:128] duration metric: took 6.957677622s to createHost
	I0919 22:33:55.036576   67622 start.go:83] releasing machines lock for "ha-984158-m02", held for 6.957813538s
	I0919 22:33:55.036645   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:55.056621   67622 out.go:179] * Found network options:
	I0919 22:33:55.058171   67622 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:33:55.059521   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:33:55.059575   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:33:55.059642   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:55.059693   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.059730   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:55.059795   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.079269   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.079505   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.307919   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:55.312965   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.336548   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:55.336628   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.368875   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:55.368896   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:55.368929   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:55.368975   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:55.384084   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:55.396627   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:55.396684   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:55.411878   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:55.426921   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:55.498750   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:55.574511   67622 docker.go:234] disabling docker service ...
	I0919 22:33:55.574592   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:55.592451   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:55.605407   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:55.676576   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:55.779960   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:55.791691   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:55.810222   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:55.810287   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.823669   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:55.823742   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.835957   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.848163   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.862113   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:55.874185   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.886226   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.904556   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
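Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following values; the key/value pairs come straight from the commands, while the TOML section placement is an assumption:

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
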
	I0919 22:33:55.915914   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:55.925425   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:55.934730   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:56.048946   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:33:56.146544   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:56.146625   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:56.150812   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:56.150868   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:56.155192   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:56.191696   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:56.191785   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.233991   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.274090   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:56.275720   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:33:56.276812   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:56.294583   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:56.298596   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:56.311418   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:33:56.311645   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:56.311889   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:56.330141   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:56.330381   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:33:56.330391   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:56.330404   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.330513   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:56.330548   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:56.330558   67622 certs.go:256] generating profile certs ...
	I0919 22:33:56.330645   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:56.330671   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:33:56.330686   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:33:56.589696   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 ...
	I0919 22:33:56.589724   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648: {Name:mk231e62d196ad4ac4ba36bf02a832f78de0258d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.589931   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 ...
	I0919 22:33:56.589950   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648: {Name:mkf30412a461a8bacfd366640c7d4da1146a9418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.590056   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:56.590233   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:56.590374   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:56.590389   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:56.590402   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:56.590416   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:56.590429   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:56.590440   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:56.590450   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:56.590459   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:56.590476   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:56.590527   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:56.590552   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:56.590561   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:56.590584   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:56.590605   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:56.590626   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:56.590665   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:56.590692   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:56.590708   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:56.590721   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:56.590767   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:56.609877   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:56.698485   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:33:56.703209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:33:56.716550   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:33:56.720735   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:33:56.733890   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:33:56.737616   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:33:56.750557   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:33:56.754948   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:33:56.770690   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:33:56.774864   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:33:56.787587   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:33:56.791154   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:33:56.804497   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:56.832411   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:56.858185   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:56.885311   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:56.911248   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:33:56.937552   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:56.963365   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:56.988811   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:57.014413   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:57.043525   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:57.069549   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:57.095993   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:33:57.115254   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:33:57.135395   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:33:57.155031   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:33:57.175220   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:33:57.194674   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:33:57.215027   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:33:57.235048   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:57.240702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:57.251492   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255754   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255806   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.263388   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:57.274606   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:57.285494   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289707   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289758   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.296995   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:57.307702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:57.318927   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323131   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323194   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.330266   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:57.340891   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:57.344726   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:57.344784   67622 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:33:57.344872   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
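To double-check that the kubelet on the new node actually picked up this drop-in (in particular --hostname-override and --node-ip), the merged unit can be printed on the node itself (generic systemd usage, not part of this run):

    systemctl cat kubelet | grep -E -- '--hostname-override|--node-ip'
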
	I0919 22:33:57.344897   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:57.344937   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:57.357462   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
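Note: the warning above means kube-vip's IPVS-based control-plane load balancing is skipped because `lsmod | grep ip_vs` found no ip_vs modules inside the node; the VIP itself is still handled via ARP. With the docker driver the node shares the host kernel, so loading the module on the host would be the usual remedy (illustrative; module name from stock kernels, not from this log):

  $ sudo modprobe ip_vs && lsmod | grep ip_vs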
	I0919 22:33:57.357529   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
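Note: the manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below); with vip_arp=true and leader election enabled it advertises the HA virtual IP 192.168.49.254 on eth0 of whichever control-plane node currently holds the lease. Illustrative checks once the node is up (context name assumed to match the profile; container name assumed to mirror ha-984158-m03):

  $ kubectl --context ha-984158 -n kube-system get pod kube-vip-ha-984158-m02 -o wide
  $ docker exec ha-984158-m02 ip addr show eth0 | grep 192.168.49.254   # only the current leader binds the VIP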
	I0919 22:33:57.357582   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:57.367667   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:57.367722   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:33:57.377333   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:33:57.395969   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:57.418145   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:33:57.439308   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:57.443458   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:57.454967   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:57.522382   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:57.545690   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:57.545979   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:57.546124   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:33:57.546185   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:57.565712   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:57.714381   67622 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:57.714452   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:34:14.891768   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.177290621s)
	I0919 22:34:14.891806   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:15.112649   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m02 minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:15.189152   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:15.268843   67622 start.go:319] duration metric: took 17.722860685s to joinCluster
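Note: the sequence above regenerated a join token on the primary node, ran `kubeadm join control-plane.minikube.internal:8443 ... --control-plane` on m02 (that hostname resolves to the VIP 192.168.49.254 via the /etc/hosts entry written earlier), then labelled the node and removed its control-plane NoSchedule taint. Illustrative verification that the second control-plane member is present (context name assumed to match the profile):

  $ kubectl --context ha-984158 get nodes -o wide
  $ kubectl --context ha-984158 -n kube-system get pods -l component=etcd -o wide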
	I0919 22:34:15.268921   67622 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:15.269212   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:15.270715   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:15.272193   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:15.373529   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:15.387143   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:15.387217   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:15.387440   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	W0919 22:34:17.391040   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:19.391218   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:21.391885   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:23.891865   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:25.892208   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	I0919 22:34:28.391466   67622 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:34:28.391502   67622 node_ready.go:38] duration metric: took 13.004045549s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:34:28.391521   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:34:28.391578   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:34:28.403875   67622 api_server.go:72] duration metric: took 13.134915716s to wait for apiserver process to appear ...
	I0919 22:34:28.403907   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:34:28.403928   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:34:28.409570   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
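Note: the healthz probe above is a plain HTTPS GET against the primary apiserver. An equivalent manual check (illustrative; -k skips certificate verification):

  $ curl -sk https://192.168.49.2:8443/healthz
  ok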
	I0919 22:34:28.410599   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:34:28.410630   67622 api_server.go:131] duration metric: took 6.715556ms to wait for apiserver health ...
	I0919 22:34:28.410646   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:34:28.415646   67622 system_pods.go:59] 17 kube-system pods found
	I0919 22:34:28.415679   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.415685   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.415689   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.415692   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.415695   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.415698   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.415701   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.415704   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.415707   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.415710   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.415713   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.415715   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.415718   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.415721   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.415723   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.415726   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.415729   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.415734   67622 system_pods.go:74] duration metric: took 5.082988ms to wait for pod list to return data ...
	I0919 22:34:28.415742   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:34:28.418466   67622 default_sa.go:45] found service account: "default"
	I0919 22:34:28.418487   67622 default_sa.go:55] duration metric: took 2.73954ms for default service account to be created ...
	I0919 22:34:28.418498   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:34:28.422326   67622 system_pods.go:86] 17 kube-system pods found
	I0919 22:34:28.422351   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.422357   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.422361   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.422365   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.422368   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.422376   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.422379   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.422383   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.422386   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.422390   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.422393   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.422396   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.422399   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.422402   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.422405   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.422408   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.422415   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.422421   67622 system_pods.go:126] duration metric: took 3.917676ms to wait for k8s-apps to be running ...
	I0919 22:34:28.422429   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:34:28.422473   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:34:28.434607   67622 system_svc.go:56] duration metric: took 12.16943ms WaitForService to wait for kubelet
	I0919 22:34:28.434637   67622 kubeadm.go:578] duration metric: took 13.165683838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:28.434659   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:34:28.437727   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437756   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437777   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437784   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437791   67622 node_conditions.go:105] duration metric: took 3.125214ms to run NodePressure ...
	I0919 22:34:28.437804   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:34:28.437837   67622 start.go:255] writing updated cluster config ...
	I0919 22:34:28.440033   67622 out.go:203] 
	I0919 22:34:28.441576   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:28.441673   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.443252   67622 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:34:28.444693   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:34:28.446038   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:28.447156   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.447185   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:28.447193   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:28.447285   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:28.447301   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:34:28.447448   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.469851   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:28.469873   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:28.469889   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:28.469913   67622 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:28.470008   67622 start.go:364] duration metric: took 81.331µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:34:28.470041   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevi
rt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Sta
ticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:28.470170   67622 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:34:28.472544   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:34:28.472649   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:34:28.472677   67622 client.go:168] LocalClient.Create starting
	I0919 22:34:28.472742   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:34:28.472780   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.472861   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:34:28.472888   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472901   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.473209   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:28.490760   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc001af8060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:34:28.490805   67622 kic.go:121] calculated static IP "192.168.49.4" for the "ha-984158-m03" container
	I0919 22:34:28.490880   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:34:28.509896   67622 cli_runner.go:164] Run: docker volume create ha-984158-m03 --label name.minikube.sigs.k8s.io=ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:34:28.528837   67622 oci.go:103] Successfully created a docker volume ha-984158-m03
	I0919 22:34:28.528911   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --entrypoint /usr/bin/test -v ha-984158-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:34:28.927062   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m03
	I0919 22:34:28.927168   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.927199   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:34:28.927268   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:34:33.212737   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.285428249s)
	I0919 22:34:33.212770   67622 kic.go:203] duration metric: took 4.285569649s to extract preloaded images to volume ...
	W0919 22:34:33.212842   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:34:33.212868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:34:33.212907   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:34:33.271794   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m03 --name ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m03 --network ha-984158 --ip 192.168.49.4 --volume ha-984158-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:34:33.577096   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Running}}
	I0919 22:34:33.595112   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:33.615056   67622 cli_runner.go:164] Run: docker exec ha-984158-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:34:33.665241   67622 oci.go:144] the created container "ha-984158-m03" has a running status.
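Note: the docker run above created the ha-984158-m03 container on the existing ha-984158 network with the static IP 192.168.49.4 calculated earlier. Illustrative verification of the assignment, using a docker inspect format similar to the one used elsewhere in this log:

  $ docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' ha-984158-m03
  192.168.49.4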
	I0919 22:34:33.665277   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa...
	I0919 22:34:34.167881   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:34:34.167925   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:34:34.195311   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.214983   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:34:34.215010   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:34:34.269287   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.290822   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:34.290917   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.310406   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.310629   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.310645   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:34.449392   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.449418   67622 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:34:34.449477   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.470431   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.470643   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.470659   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:34:34.622394   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.622486   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.641997   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.642244   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.642262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:34.780134   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.780169   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:34:34.780191   67622 ubuntu.go:190] setting up certificates
	I0919 22:34:34.780205   67622 provision.go:84] configureAuth start
	I0919 22:34:34.780271   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:34.799584   67622 provision.go:143] copyHostCerts
	I0919 22:34:34.799658   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799692   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:34:34.799701   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799769   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:34:34.799851   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799870   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:34:34.799877   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799904   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:34:34.799966   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.799983   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:34:34.799989   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.800012   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:34:34.800115   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:34:34.944518   67622 provision.go:177] copyRemoteCerts
	I0919 22:34:34.944575   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:34.944606   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.963408   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.062939   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:35.063013   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:35.095527   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:35.095582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:35.122809   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:35.122880   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:35.150023   67622 provision.go:87] duration metric: took 369.804514ms to configureAuth
	I0919 22:34:35.150056   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:35.150311   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:35.150452   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.170186   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:35.170414   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:35.170546   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:34:35.424872   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:34:35.424903   67622 machine.go:96] duration metric: took 1.1340482s to provisionDockerMachine
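Note: the SSH command a few lines up wrote /etc/sysconfig/crio.minikube with the extra `--insecure-registry 10.96.0.0/12` option and restarted CRI-O on the new node. Illustrative checks of the result:

  $ docker exec ha-984158-m03 cat /etc/sysconfig/crio.minikube
  $ docker exec ha-984158-m03 systemctl is-active crio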
	I0919 22:34:35.424913   67622 client.go:171] duration metric: took 6.952229218s to LocalClient.Create
	I0919 22:34:35.424932   67622 start.go:167] duration metric: took 6.95228363s to libmachine.API.Create "ha-984158"
	I0919 22:34:35.424941   67622 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:34:35.424950   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:35.425005   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:35.425044   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.443122   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.542973   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:35.547045   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:35.547098   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:35.547140   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:35.547149   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:35.547164   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:34:35.547243   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:34:35.547346   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:34:35.547359   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:34:35.547461   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:35.557222   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:35.587487   67622 start.go:296] duration metric: took 162.532916ms for postStartSetup
	I0919 22:34:35.587898   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.605883   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:35.606188   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:35.606230   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.625506   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.719327   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:35.724945   67622 start.go:128] duration metric: took 7.25475977s to createHost
	I0919 22:34:35.724975   67622 start.go:83] releasing machines lock for "ha-984158-m03", held for 7.25495293s
	I0919 22:34:35.725066   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.746436   67622 out.go:179] * Found network options:
	I0919 22:34:35.748613   67622 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:34:35.750204   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750230   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750252   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750261   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:35.750333   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:34:35.750367   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.750414   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:35.750481   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.770785   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.771520   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:36.012617   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:36.017809   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.041480   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:36.041572   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.074662   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
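Note: minikube renames the stock bridge/podman CNI configs (adding a .mk_disabled suffix) so that only the CNI it deploys itself, kindnet in this run, is active on the node. Illustrative listing of the result:

  $ docker exec ha-984158-m03 ls /etc/cni/net.d/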
	I0919 22:34:36.074688   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:34:36.074719   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:36.074766   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:36.093544   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:36.107751   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:34:36.107801   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:34:36.123972   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:34:36.140690   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:34:36.213915   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:34:36.293890   67622 docker.go:234] disabling docker service ...
	I0919 22:34:36.293970   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:34:36.315495   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:34:36.329394   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:34:36.401603   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:34:36.566519   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.580168   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:36.598521   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:34:36.598580   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.612994   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:34:36.613052   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.625369   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.636513   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.647884   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:36.658467   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.670077   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.688463   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
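Note: the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image to registry.k8s.io/pause:3.10.1, selecting the systemd cgroup manager with conmon in the pod cgroup, and allowing unprivileged binding of low ports. A reconstructed sketch of the touched keys (the section headers are assumed from the stock CRI-O config layout and are not shown in this log):

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.10.1"

  [crio.runtime]
  cgroup_manager = "systemd"
  conmon_cgroup = "pod"
  default_sysctls = [
    "net.ipv4.ip_unprivileged_port_start=0",
  ]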
	I0919 22:34:36.700347   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:36.710192   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:36.722230   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.786818   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:34:36.889165   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:34:36.889244   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:34:36.893369   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.893434   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.897483   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.935462   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:34:36.935558   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:36.971682   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:37.011225   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:34:37.012939   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:34:37.014619   67622 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:34:37.016609   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:37.035904   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:37.040209   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:37.053278   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:34:37.053547   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:37.053803   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:34:37.073847   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:37.074139   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:34:37.074157   67622 certs.go:194] generating shared ca certs ...
	I0919 22:34:37.074173   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.074282   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:34:37.074329   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:34:37.074340   67622 certs.go:256] generating profile certs ...
	I0919 22:34:37.074417   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:34:37.074441   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:34:37.074452   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.137117   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 ...
	I0919 22:34:37.137145   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7: {Name:mk19194d581061c0301a7ebaafcb4f75dd6f88da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137332   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 ...
	I0919 22:34:37.137346   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7: {Name:mkdc03dbd8fb2d6fc0a8ac2bb45b7aa14987fe74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137418   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:34:37.137557   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
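Note: because a third control-plane node is being added, minikube regenerates the shared apiserver serving certificate so that its SANs cover every control-plane IP plus the VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.4 and 192.168.49.254, per the log above). Illustrative way to read the SANs off the installed certificate (path taken from the scp steps below):

  $ docker exec ha-984158 sh -c "openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'"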
	I0919 22:34:37.137679   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:34:37.137694   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.137706   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.137719   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.137732   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.137744   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.137756   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.137768   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.137780   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.137836   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:34:37.137865   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.137875   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.137895   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.137918   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.137950   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.137989   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:37.138014   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.138027   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.138042   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.138089   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:37.156562   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:37.245522   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:34:37.249874   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:34:37.263553   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:34:37.267840   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:34:37.282009   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:34:37.286008   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:34:37.299365   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:34:37.303011   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:34:37.316000   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:34:37.319968   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:34:37.335075   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:34:37.339209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:34:37.352485   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.379736   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:34:37.405614   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.430819   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:34:37.457286   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:34:37.485582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.511990   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.539620   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:34:37.566336   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.597966   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:34:37.624934   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:34:37.652281   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:34:37.672835   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:34:37.693826   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:34:37.712995   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:34:37.735150   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:34:37.755380   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:34:37.775695   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:34:37.796705   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.802715   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:34:37.814531   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819194   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819264   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.826904   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.838758   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.849465   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853251   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853305   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.860596   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:34:37.872602   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:34:37.885280   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889622   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889680   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.896943   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.908337   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.912368   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:34:37.912422   67622 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:34:37.912521   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:34:37.912549   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:37.912589   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:37.927225   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.927295   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:34:37.927349   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:37.937175   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:37.937241   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:34:37.946525   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:34:37.966151   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:37.991832   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:38.014409   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:38.018813   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:38.034487   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:38.100010   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:38.123308   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:38.123594   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false log
viewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAu
thSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:38.123717   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:34:38.123769   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:38.144625   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:38.293340   67622 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:38.293387   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:34:51.872651   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (13.579238089s)
	I0919 22:34:51.872690   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:52.127072   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m03 minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:52.206869   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:52.293044   67622 start.go:319] duration metric: took 14.169442875s to joinCluster
	I0919 22:34:52.293202   67622 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:52.293464   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:52.295014   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:52.296471   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:52.405642   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:52.419776   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:52.419840   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:52.420054   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	W0919 22:34:54.424074   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:56.924240   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:58.925198   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:01.425329   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:03.923474   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	I0919 22:35:05.424225   67622 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:35:05.424253   67622 node_ready.go:38] duration metric: took 13.004161929s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:35:05.424266   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:05.424326   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:05.438342   67622 api_server.go:72] duration metric: took 13.14509411s to wait for apiserver process to appear ...
	I0919 22:35:05.438367   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:05.438390   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:05.442575   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:05.443547   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:05.443573   67622 api_server.go:131] duration metric: took 5.19876ms to wait for apiserver health ...
	I0919 22:35:05.443582   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:05.452030   67622 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:05.452062   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.452067   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.452073   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.452079   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.452084   67622 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.452089   67622 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.452094   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.452129   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.452136   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.452141   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.452146   67622 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.452151   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.452156   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.452161   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.452165   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.452170   67622 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.452174   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.452179   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.452184   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.452188   67622 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.452193   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.452199   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.452205   67622 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.452208   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.452217   67622 system_pods.go:74] duration metric: took 8.62798ms to wait for pod list to return data ...
	I0919 22:35:05.452227   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:05.455571   67622 default_sa.go:45] found service account: "default"
	I0919 22:35:05.455594   67622 default_sa.go:55] duration metric: took 3.361804ms for default service account to be created ...
	I0919 22:35:05.455603   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:05.460748   67622 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:05.460777   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.460783   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.460787   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.460790   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.460793   67622 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.460798   67622 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.460801   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.460803   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.460806   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.460809   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.460812   67622 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.460815   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.460818   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.460821   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.460826   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.460829   67622 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.460832   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.460835   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.460838   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.460841   67622 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.460844   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.460847   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.460850   67622 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.460853   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.460859   67622 system_pods.go:126] duration metric: took 5.251911ms to wait for k8s-apps to be running ...
	I0919 22:35:05.460866   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:05.460906   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:05.475728   67622 system_svc.go:56] duration metric: took 14.850569ms WaitForService to wait for kubelet
	I0919 22:35:05.475767   67622 kubeadm.go:578] duration metric: took 13.182524274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:05.475791   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:05.479992   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480016   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480028   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480032   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480035   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480038   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480042   67622 node_conditions.go:105] duration metric: took 4.246099ms to run NodePressure ...
	I0919 22:35:05.480052   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:35:05.480076   67622 start.go:255] writing updated cluster config ...
	I0919 22:35:05.480391   67622 ssh_runner.go:195] Run: rm -f paused
	I0919 22:35:05.484443   67622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:05.484864   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:35:05.488632   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494158   67622 pod_ready.go:94] pod "coredns-66bc5c9577-5gnbx" is "Ready"
	I0919 22:35:05.494184   67622 pod_ready.go:86] duration metric: took 5.519921ms for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494194   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.498979   67622 pod_ready.go:94] pod "coredns-66bc5c9577-ltjmz" is "Ready"
	I0919 22:35:05.499001   67622 pod_ready.go:86] duration metric: took 4.801852ms for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.501488   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506605   67622 pod_ready.go:94] pod "etcd-ha-984158" is "Ready"
	I0919 22:35:05.506631   67622 pod_ready.go:86] duration metric: took 5.121241ms for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506643   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511687   67622 pod_ready.go:94] pod "etcd-ha-984158-m02" is "Ready"
	I0919 22:35:05.511711   67622 pod_ready.go:86] duration metric: took 5.063338ms for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511721   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.686203   67622 request.go:683] "Waited before sending request" delay="174.390617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-984158-m03"
	I0919 22:35:05.886318   67622 request.go:683] "Waited before sending request" delay="196.323175ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:05.889520   67622 pod_ready.go:94] pod "etcd-ha-984158-m03" is "Ready"
	I0919 22:35:05.889544   67622 pod_ready.go:86] duration metric: took 377.817661ms for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.086145   67622 request.go:683] "Waited before sending request" delay="196.407438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:35:06.090017   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.285426   67622 request.go:683] "Waited before sending request" delay="195.307128ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158"
	I0919 22:35:06.486234   67622 request.go:683] "Waited before sending request" delay="197.363102ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:06.489211   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158" is "Ready"
	I0919 22:35:06.489239   67622 pod_ready.go:86] duration metric: took 399.19471ms for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.489249   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.685697   67622 request.go:683] "Waited before sending request" delay="196.373047ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m02"
	I0919 22:35:06.885918   67622 request.go:683] "Waited before sending request" delay="197.214097ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:06.888940   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m02" is "Ready"
	I0919 22:35:06.888966   67622 pod_ready.go:86] duration metric: took 399.709223ms for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.888977   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.086320   67622 request.go:683] "Waited before sending request" delay="197.234187ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m03"
	I0919 22:35:07.286155   67622 request.go:683] "Waited before sending request" delay="196.391562ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:07.289116   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m03" is "Ready"
	I0919 22:35:07.289145   67622 pod_ready.go:86] duration metric: took 400.160627ms for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.485647   67622 request.go:683] "Waited before sending request" delay="196.369215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:35:07.489356   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.685801   67622 request.go:683] "Waited before sending request" delay="196.331241ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158"
	I0919 22:35:07.886175   67622 request.go:683] "Waited before sending request" delay="197.36953ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:07.889268   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158" is "Ready"
	I0919 22:35:07.889292   67622 pod_ready.go:86] duration metric: took 399.911799ms for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.889300   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.085780   67622 request.go:683] "Waited before sending request" delay="196.397628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m02"
	I0919 22:35:08.286293   67622 request.go:683] "Waited before sending request" delay="197.157746ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:08.289542   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m02" is "Ready"
	I0919 22:35:08.289565   67622 pod_ready.go:86] duration metric: took 400.260559ms for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.289585   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.486054   67622 request.go:683] "Waited before sending request" delay="196.383406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m03"
	I0919 22:35:08.685765   67622 request.go:683] "Waited before sending request" delay="196.365381ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:08.688911   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m03" is "Ready"
	I0919 22:35:08.688939   67622 pod_ready.go:86] duration metric: took 399.348524ms for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.885240   67622 request.go:683] "Waited before sending request" delay="196.197284ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:35:08.888653   67622 pod_ready.go:83] waiting for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.086194   67622 request.go:683] "Waited before sending request" delay="197.430633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hdxxn"
	I0919 22:35:09.285936   67622 request.go:683] "Waited before sending request" delay="196.399441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:09.289309   67622 pod_ready.go:94] pod "kube-proxy-hdxxn" is "Ready"
	I0919 22:35:09.289344   67622 pod_ready.go:86] duration metric: took 400.666867ms for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.289356   67622 pod_ready.go:83] waiting for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.485857   67622 request.go:683] "Waited before sending request" delay="196.368869ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k2drm"
	I0919 22:35:09.685224   67622 request.go:683] "Waited before sending request" delay="196.312304ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:09.688202   67622 pod_ready.go:94] pod "kube-proxy-k2drm" is "Ready"
	I0919 22:35:09.688225   67622 pod_ready.go:86] duration metric: took 398.86315ms for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.688232   67622 pod_ready.go:83] waiting for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.885674   67622 request.go:683] "Waited before sending request" delay="197.37394ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-plrn2"
	I0919 22:35:10.085404   67622 request.go:683] "Waited before sending request" delay="196.238234ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:10.088413   67622 pod_ready.go:94] pod "kube-proxy-plrn2" is "Ready"
	I0919 22:35:10.088435   67622 pod_ready.go:86] duration metric: took 400.198021ms for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.285955   67622 request.go:683] "Waited before sending request" delay="197.399738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0919 22:35:10.289773   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.486274   67622 request.go:683] "Waited before sending request" delay="196.397415ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158"
	I0919 22:35:10.685865   67622 request.go:683] "Waited before sending request" delay="196.354476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:10.688789   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158" is "Ready"
	I0919 22:35:10.688812   67622 pod_ready.go:86] duration metric: took 399.015441ms for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.688821   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.886266   67622 request.go:683] "Waited before sending request" delay="197.365068ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m02"
	I0919 22:35:11.085685   67622 request.go:683] "Waited before sending request" delay="196.401015ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:11.088847   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m02" is "Ready"
	I0919 22:35:11.088884   67622 pod_ready.go:86] duration metric: took 400.056175ms for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.088895   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.285309   67622 request.go:683] "Waited before sending request" delay="196.306548ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m03"
	I0919 22:35:11.485951   67622 request.go:683] "Waited before sending request" delay="197.396443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:11.489000   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m03" is "Ready"
	I0919 22:35:11.489026   67622 pod_ready.go:86] duration metric: took 400.124566ms for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.489036   67622 pod_ready.go:40] duration metric: took 6.004562578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:11.533521   67622 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:35:11.535265   67622 out.go:179] * Done! kubectl is now configured to use "ha-984158" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.550284463Z" level=info msg="Starting container: ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a" id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.559054866Z" level=info msg="Started container" PID=2323 containerID=ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a description=kube-system/coredns-66bc5c9577-5gnbx/coredns id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67546437e6cd1431d56127b35c686ec4fbef541821d81e817187eac2eac44ae
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844458340Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844519430Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863307191Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863361143Z" level=info msg="Adding pod default_busybox-7b57f96db7-rnjl7 to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877409166Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877580199Z" level=info msg="Checking pod default_busybox-7b57f96db7-rnjl7 for CNI network kindnet (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.878483692Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.879359170Z" level=info msg="Ran pod sandbox 310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 with infra container: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880607012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880856313Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.881636849Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.882840066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:13 ha-984158 crio[940]: time="2025-09-19 22:35:13.826935593Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.299818076Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.300497300Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301041675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301798545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.302421301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305168065Z" level=info msg="Creating container: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305267569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.380968697Z" level=info msg="Created container 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.381641384Z" level=info msg="Starting container: 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e" id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.388597470Z" level=info msg="Started container" PID=2560 containerID=9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e description=default/busybox-7b57f96db7-rnjl7/busybox id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9169b9b095a98       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   About a minute ago   Running             busybox                   0                   310dd81aa6739       busybox-7b57f96db7-rnjl7
	ea03ecb87a050       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago        Running             coredns                   0                   a67546437e6cd       coredns-66bc5c9577-5gnbx
	d9aec8cde801c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Running             storage-provisioner       0                   f2f4dad3060cd       storage-provisioner
	7df7251c31862       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      2 minutes ago        Running             coredns                   0                   549805b340720       coredns-66bc5c9577-ltjmz
	66e8ff6b4b2da       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      2 minutes ago        Running             kindnet-cni               0                   ca0bb4eb3a856       kindnet-rd882
	c90c0cf2d2e8d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      2 minutes ago        Running             kube-proxy                0                   6de94aa7ba9e1       kube-proxy-hdxxn
	6b6a81f4f6b23       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     2 minutes ago        Running             kube-vip                  0                   fba7b712cd4d4       kube-vip-ha-984158
	ccf53f9534beb       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      2 minutes ago        Running             kube-controller-manager   0                   15b128d3c6aed       kube-controller-manager-ha-984158
	01cd32d6daeeb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      2 minutes ago        Running             kube-scheduler            0                   d854ebb188beb       kube-scheduler-ha-984158
	fda65fdd5e2b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      2 minutes ago        Running             etcd                      0                   9e61b75f9a67d       etcd-ha-984158
	8ed4a5888320b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      2 minutes ago        Running             kube-apiserver            0                   f7a2c4489feba       kube-apiserver-ha-984158
	
	
	==> coredns [7df7251c318624785e44160ab98a256321ca02663ac3f38b31058625169e65cf] <==
	[INFO] 10.244.1.2:34043 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006963816s
	[INFO] 10.244.1.2:38425 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137951s
	[INFO] 10.244.2.2:51391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001353s
	[INFO] 10.244.2.2:50788 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010898214s
	[INFO] 10.244.2.2:57984 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165964s
	[INFO] 10.244.2.2:46802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00010628s
	[INFO] 10.244.2.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133945s
	[INFO] 10.244.0.4:44778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139187s
	[INFO] 10.244.0.4:52371 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.0.4:44391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012178s
	[INFO] 10.244.0.4:42322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090724s
	[INFO] 10.244.1.2:47486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152861s
	[INFO] 10.244.1.2:33837 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197948s
	[INFO] 10.244.2.2:57569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187028s
	[INFO] 10.244.2.2:49299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201838s
	[INFO] 10.244.2.2:56021 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115909s
	[INFO] 10.244.0.4:58940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136946s
	[INFO] 10.244.0.4:36648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142402s
	[INFO] 10.244.1.2:54958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137478s
	[INFO] 10.244.1.2:49367 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111679s
	[INFO] 10.244.2.2:37477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176669s
	[INFO] 10.244.2.2:37006 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082361s
	[INFO] 10.244.0.4:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131909s
	[INFO] 10.244.0.4:59935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069811s
	[INFO] 10.244.0.4:50031 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124505s
	
	
	==> coredns [ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a] <==
	[INFO] 10.244.2.2:33714 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159773s
	[INFO] 10.244.2.2:40292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00009881s
	[INFO] 10.244.2.2:39630 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000811472s
	[INFO] 10.244.0.4:43002 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000112134s
	[INFO] 10.244.0.4:40782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000094347s
	[INFO] 10.244.1.2:36510 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033427373s
	[INFO] 10.244.1.2:41816 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158466s
	[INFO] 10.244.1.2:43260 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193529s
	[INFO] 10.244.2.2:48795 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161887s
	[INFO] 10.244.2.2:46683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133363s
	[INFO] 10.244.2.2:56162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135264s
	[INFO] 10.244.0.4:60293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085933s
	[INFO] 10.244.0.4:50296 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010728706s
	[INFO] 10.244.0.4:42098 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170789s
	[INFO] 10.244.0.4:50435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154329s
	[INFO] 10.244.1.2:49298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184582s
	[INFO] 10.244.1.2:58606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110603s
	[INFO] 10.244.2.2:33122 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186581s
	[INFO] 10.244.0.4:51847 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155018s
	[INFO] 10.244.0.4:49360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091433s
	[INFO] 10.244.1.2:44523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150525s
	[INFO] 10.244.1.2:48087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154066s
	[INFO] 10.244.2.2:47219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124336s
	[INFO] 10.244.2.2:58889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148273s
	[INFO] 10.244.0.4:47101 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088754s
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:36:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 39160f7d8b9f44c18aede41e4d267fbd
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m35s
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m35s
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m38s
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m36s
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m38s
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m41s
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m43s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m33s                  kube-proxy       
	  Normal  NodeHasSufficientPID     2m46s (x8 over 2m46s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m46s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m46s (x8 over 2m46s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m46s (x8 over 2m46s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m38s                  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m38s                  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m38s                  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m37s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                2m23s                  kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           88s                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:35:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:36 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 d32b005f3b5146359774fcbe4364b90b
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m8s
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m8s
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        2m4s  kube-proxy       
	  Normal  RegisteredNode  2m7s  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  2m4s  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode  88s   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	Name:               ha-984158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:36:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-984158-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 038f6eff3d614d78917c49afbf40a4e7
	  System UUID:                a60f86ef-6d86-4217-85ca-ad02416ddc34
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c7qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         70s
	  kube-system                 etcd-ha-984158-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         89s
	  kube-system                 kindnet-269nt                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-ha-984158-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-controller-manager-ha-984158-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-k2drm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-ha-984158-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-vip-ha-984158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age   From             Message
	  ----    ------          ----  ----             -------
	  Normal  Starting        87s   kube-proxy       
	  Normal  RegisteredNode  89s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  88s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  86s   node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [fda65fdd5e2b890fe6940cd0f6b5afae54775a44a1e30b23dc514a1ea4a5dd4c] <==
	{"level":"info","ts":"2025-09-19T22:35:12.622830Z","caller":"etcdserver/server.go:2246","msg":"skip compaction since there is an inflight snapshot"}
	{"level":"info","ts":"2025-09-19T22:35:12.851680Z","caller":"etcdserver/server.go:1856","msg":"sent merged snapshot","from":"aec36adc501070cc","to":"e8495135083f8257","bytes":1479617,"size":"1.5 MB","took":"30.017342016s"}
	{"level":"info","ts":"2025-09-19T22:35:40.335511Z","caller":"traceutil/trace.go:172","msg":"trace[580727823] transaction","detail":"{read_only:false; response_revision:1018; number_of_response:1; }","duration":"128.447767ms","start":"2025-09-19T22:35:40.207051Z","end":"2025-09-19T22:35:40.335498Z","steps":["trace[580727823] 'process raft request'  (duration: 128.303588ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:40.335758Z","caller":"traceutil/trace.go:172","msg":"trace[1969207353] linearizableReadLoop","detail":"{readStateIndex:1194; appliedIndex:1195; }","duration":"117.354033ms","start":"2025-09-19T22:35:40.218388Z","end":"2025-09-19T22:35:40.335742Z","steps":["trace[1969207353] 'read index received'  (duration: 117.348211ms)","trace[1969207353] 'applied index is now lower than readState.Index'  (duration: 4.715µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:40.335880Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"117.473932ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/certificatesigningrequests\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:40.335910Z","caller":"traceutil/trace.go:172","msg":"trace[12563226] range","detail":"{range_begin:/registry/certificatesigningrequests; range_end:; response_count:0; response_revision:1018; }","duration":"117.51944ms","start":"2025-09-19T22:35:40.218383Z","end":"2025-09-19T22:35:40.335902Z","steps":["trace[12563226] 'agreement among raft nodes before linearized reading'  (duration: 117.444854ms)"],"step_count":1}
	{"level":"info","ts":"2025-09-19T22:35:41.265249Z","caller":"traceutil/trace.go:172","msg":"trace[1252869991] linearizableReadLoop","detail":"{readStateIndex:1199; appliedIndex:1199; }","duration":"121.843359ms","start":"2025-09-19T22:35:41.143386Z","end":"2025-09-19T22:35:41.265229Z","steps":["trace[1252869991] 'read index received'  (duration: 121.835594ms)","trace[1252869991] 'applied index is now lower than readState.Index'  (duration: 6.337µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398137Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"254.71266ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/poddisruptionbudgets\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.398198Z","caller":"traceutil/trace.go:172","msg":"trace[1812653205] range","detail":"{range_begin:/registry/poddisruptionbudgets; range_end:; response_count:0; response_revision:1020; }","duration":"254.803848ms","start":"2025-09-19T22:35:41.143376Z","end":"2025-09-19T22:35:41.398180Z","steps":["trace[1812653205] 'agreement among raft nodes before linearized reading'  (duration: 121.941063ms)","trace[1812653205] 'range keys from in-memory index tree'  (duration: 132.739969ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T22:35:41.398804Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.156113ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6221891540473536501 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.49.3\" mod_revision:996 > success:<request_put:<key:\"/registry/masterleases/192.168.49.3\" value_size:65 lease:6221891540473536499 >> failure:<>>","response":"size:16"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658165Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e8495135083f8257","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.83656ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.658213Z","caller":"etcdserver/raft.go:387","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"63b66b54cc365658","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"21.890877ms"}
	{"level":"warn","ts":"2025-09-19T22:35:41.659958Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.463182ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T22:35:41.660011Z","caller":"traceutil/trace.go:172","msg":"trace[1201229941] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1022; }","duration":"114.533322ms","start":"2025-09-19T22:35:41.545465Z","end":"2025-09-19T22:35:41.659998Z","steps":["trace[1201229941] 'range keys from in-memory index tree'  (duration: 114.424434ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T22:36:08.429645Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:36:08.430092Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:36:08.436299Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"63b66b54cc365658","error":"failed to dial 63b66b54cc365658 on stream MsgApp v2 (EOF)"}
	{"level":"warn","ts":"2025-09-19T22:36:08.546014Z","caller":"rafthttp/stream.go:222","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	{"level":"warn","ts":"2025-09-19T22:36:09.539212Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:09.539269Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:12.487987Z","caller":"rafthttp/stream.go:193","msg":"lost TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	{"level":"warn","ts":"2025-09-19T22:36:13.540072Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:13.540142Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:17.541366Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	{"level":"warn","ts":"2025-09-19T22:36:17.541416Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: connect: connection refused"}
	
	
	==> kernel <==
	 22:36:22 up  1:18,  0 users,  load average: 1.32, 0.72, 0.50
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [66e8ff6b4b2da8ea01c46a247aa4714a90f2ed1d2ba051443dc7790f7f9aa6d2] <==
	I0919 22:35:38.711602       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710009       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:48.710041       1 main.go:301] handling current node
	I0919 22:35:48.710057       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:48.710061       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:48.710325       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:48.710351       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:35:58.715188       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:35:58.715226       1 main.go:301] handling current node
	I0919 22:35:58.715243       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:35:58.715250       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:35:58.715473       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:35:58.715492       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:08.715277       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:08.715323       1 main.go:301] handling current node
	I0919 22:36:08.715344       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:08.715353       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:36:08.715546       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:08.715558       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:18.719723       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:18.719758       1 main.go:301] handling current node
	I0919 22:36:18.719774       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:18.719779       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:36:18.720046       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:18.720063       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ed4a5888320b17174d5fd3227517f4c634bc157381bb9771474bfa5169aab2f] <==
	I0919 22:33:44.107869       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:33:45.993421       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I0919 22:33:46.743338       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:33:46.796068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:46.799874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:34:55.461764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:00.508368       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:35:16.679730       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50288: use of closed network connection
	E0919 22:35:16.855038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50310: use of closed network connection
	E0919 22:35:17.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50338: use of closed network connection
	E0919 22:35:17.243171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50346: use of closed network connection
	E0919 22:35:17.421526       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50372: use of closed network connection
	E0919 22:35:17.591329       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50402: use of closed network connection
	E0919 22:35:17.761924       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50422: use of closed network connection
	E0919 22:35:17.931932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50438: use of closed network connection
	E0919 22:35:18.091452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50456: use of closed network connection
	E0919 22:35:18.368592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50480: use of closed network connection
	E0919 22:35:18.524781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50484: use of closed network connection
	E0919 22:35:18.691736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50510: use of closed network connection
	E0919 22:35:18.869219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50534: use of closed network connection
	E0919 22:35:19.030842       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50552: use of closed network connection
	E0919 22:35:19.201169       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50566: use of closed network connection
	I0919 22:36:01.868494       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:02.874315       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:36:20.677007       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	
	
	==> kube-controller-manager [ccf53f9534beb8a8c8742cb5e71e0540bfd9bc439877b525756c21d5eef9b422] <==
	I0919 22:33:45.991296       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:33:45.991359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:33:45.991661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:33:45.992619       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:33:45.992661       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:33:45.992715       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:33:45.992824       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:33:45.992860       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:33:45.992945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:33:45.992988       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0919 22:33:45.994081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:33:45.994164       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:33:45.997463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:33:46.000645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:33:46.007588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:33:46.014824       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:33:46.019019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:34:00.995932       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0919 22:34:13.994601       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-f5gnl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-f5gnl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:34:14.552916       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m02\" does not exist"
	I0919 22:34:14.582362       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:34:15.998546       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:34:51.526332       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m03\" does not exist"
	I0919 22:34:51.541723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:34:56.108424       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [c90c0cf2d2e8d28017db69b5b6570bb146918d86f62813e08b6cf30633aabf39] <==
	I0919 22:33:48.275684       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:33:48.343595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:33:48.444904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:33:48.444958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:33:48.445144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:33:48.471588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:33:48.471666       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:33:48.477726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:33:48.478178       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:33:48.478219       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:33:48.480033       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:33:48.480053       1 config.go:200] "Starting service config controller"
	I0919 22:33:48.480068       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:33:48.480085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:33:48.482031       1 config.go:309] "Starting node config controller"
	I0919 22:33:48.482049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:33:48.482057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:33:48.480508       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:33:48.482857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:33:48.580234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:33:48.582666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:33:48.583733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [01cd32d6daeeb8f86625ec5d90712811aa7cc0b7dee503e21a57e8bd093892cc] <==
	E0919 22:33:39.908093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:33:39.911081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:33:39.988409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:33:40.028297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:33:40.063508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:33:40.098835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:33:40.219678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:33:40.224737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:33:40.235874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:33:40.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0919 22:33:42.406311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:34:14.584511       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-plrn2" node="ha-984158-m02"
	E0919 22:34:14.584664       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-plrn2"
	E0919 22:34:51.565644       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.565863       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 040bf3f7-8d97-4799-b3a2-12b57eec38ef(kube-system/kube-proxy-k2drm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565922       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565851       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	E0919 22:34:51.565999       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6db503ca-eaf1-4ffc-8418-f778e65529c9(kube-system/kube-proxy-tqv25) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	E0919 22:34:51.565619       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	E0919 22:34:51.566066       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2040513e-991f-4c82-9b69-1e3fa299841a(kube-system/kindnet-gtv88) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	E0919 22:34:51.568208       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	I0919 22:34:51.568393       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	I0919 22:34:51.568363       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.568334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	I0919 22:34:51.574210       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	
	
	==> kubelet <==
	Sep 19 22:34:23 ha-984158 kubelet[1691]: E0919 22:34:23.926836    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321263926568823  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928784    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:33 ha-984158 kubelet[1691]: E0919 22:34:33.928816    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321273928474652  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.930936    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:43 ha-984158 kubelet[1691]: E0919 22:34:43.931007    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321283930660810  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932414    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:34:53 ha-984158 kubelet[1691]: E0919 22:34:53.932450    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321293932160714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934355    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:03 ha-984158 kubelet[1691]: E0919 22:35:03.934407    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321303934004965  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:12 ha-984158 kubelet[1691]: I0919 22:35:12.604999    1691 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-984pg\" (UniqueName: \"kubernetes.io/projected/68cd1643-e7c7-480f-af91-8f2f4eafb766-kube-api-access-984pg\") pod \"busybox-7b57f96db7-rnjl7\" (UID: \"68cd1643-e7c7-480f-af91-8f2f4eafb766\") " pod="default/busybox-7b57f96db7-rnjl7"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935689    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:13 ha-984158 kubelet[1691]: E0919 22:35:13.935726    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321313935476454  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:140135}  inodes_used:{value:63}}"
	Sep 19 22:35:19 ha-984158 kubelet[1691]: E0919 22:35:19.030824    1691 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:40998->127.0.0.1:37933: write tcp 127.0.0.1:40998->127.0.0.1:37933: write: broken pipe
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937510    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937554    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938855    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938899    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940553    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940595    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942304    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942351    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943680    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943728    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:13 ha-984158 kubelet[1691]: E0919 22:36:13.944965    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321373944715242  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:13 ha-984158 kubelet[1691]: E0919 22:36:13.945002    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321373944715242  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (16.19s)
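Most of the kubelet log captured above is the same pair of eviction_manager messages repeating every ten seconds ("failed to get HasDedicatedImageFs ... missing image stats"), which drowns out the lines that matter for the StopSecondaryNode failure. A minimal triage sketch in Go, purely hypothetical and not part of the test suite, that filters a saved journal dump so those repeated lines are counted instead of printed:

	// countevictions.go: hypothetical helper that counts the repeated
	// "eviction_manager ... missing image stats" lines in a saved kubelet
	// journal dump, so the remaining log lines stand out during triage.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		if len(os.Args) < 2 {
			fmt.Fprintln(os.Stderr, "usage: countevictions <kubelet-log-file>")
			os.Exit(2)
		}
		f, err := os.Open(os.Args[1])
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()

		noisy, total := 0, 0
		sc := bufio.NewScanner(f)
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be long
		for sc.Scan() {
			total++
			line := sc.Text()
			if strings.Contains(line, "eviction_manager") && strings.Contains(line, "missing image stats") {
				noisy++ // count the repeating noise instead of printing it
				continue
			}
			fmt.Println(line) // pass every other line through unchanged
		}
		fmt.Fprintf(os.Stderr, "filtered %d of %d lines\n", noisy, total)
	}

Feed it a dump saved from the node's kubelet journal and only the non-repeating lines remain.
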

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (65.93s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 node start m02 --alsologtostderr -v 5: (8.007665735s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (735.19562ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:31.504186   90583 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:31.504480   90583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:31.504492   90583 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:31.504495   90583 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:31.504698   90583 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:31.504870   90583 out.go:368] Setting JSON to false
	I0919 22:36:31.504889   90583 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:31.504947   90583 notify.go:220] Checking for updates...
	I0919 22:36:31.505302   90583 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:31.505327   90583 status.go:174] checking status of ha-984158 ...
	I0919 22:36:31.505735   90583 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:31.524917   90583 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:31.524960   90583 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:31.525254   90583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:31.545078   90583 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:31.545363   90583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:31.545437   90583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:31.565644   90583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:31.660748   90583 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:31.665388   90583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:31.678660   90583 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:31.740094   90583 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:31.729420375 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:31.740888   90583 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:31.740922   90583 api_server.go:166] Checking apiserver status ...
	I0919 22:36:31.740965   90583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:31.754039   90583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:31.765038   90583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:31.765121   90583 ssh_runner.go:195] Run: ls
	I0919 22:36:31.769085   90583 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:31.773473   90583 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:31.773496   90583 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:31.773505   90583 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:31.773522   90583 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:31.773814   90583 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:31.793261   90583 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:31.793295   90583 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:31.793532   90583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:31.810792   90583 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:31.811034   90583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:31.811071   90583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:31.829293   90583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:31.924501   90583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:31.937038   90583 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:31.937073   90583 api_server.go:166] Checking apiserver status ...
	I0919 22:36:31.937123   90583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:31.949147   90583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:31.959363   90583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:31.959416   90583 ssh_runner.go:195] Run: ls
	I0919 22:36:31.963444   90583 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:31.967633   90583 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:31.967656   90583 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:31.967665   90583 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:31.967686   90583 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:31.967909   90583 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:31.987572   90583 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:31.987599   90583 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:31.987855   90583 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:32.006505   90583 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:32.006824   90583 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:32.006875   90583 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:32.024769   90583 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:32.120124   90583 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:32.134416   90583 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:32.134448   90583 api_server.go:166] Checking apiserver status ...
	I0919 22:36:32.134488   90583 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:32.148000   90583 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:32.159419   90583 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:32.159490   90583 ssh_runner.go:195] Run: ls
	I0919 22:36:32.163350   90583 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:32.167743   90583 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:32.167771   90583 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:32.167784   90583 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:32.167808   90583 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:32.168144   90583 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:32.187907   90583 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:32.187928   90583 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:32.187934   90583 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:32.192647   18175 retry.go:31] will retry after 739.98198ms: exit status 7
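Each status pass in the stderr trace above probes the shared apiserver endpoint at https://192.168.49.254:8443/healthz and expects a 200 "ok", and the harness then retries the whole status command with a growing delay whenever it exits non-zero. A minimal sketch of that probe-and-retry pattern in Go; the endpoint is copied from the trace, the backoff numbers are illustrative, and skipping TLS verification is a shortcut for a throwaway local probe, not a claim about how minikube itself dials the apiserver:

	// healthzprobe.go: sketch of probing an apiserver /healthz endpoint with a
	// growing retry delay, in the spirit of the status loop in the trace above.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		// A throwaway local probe; the apiserver certificate is self-signed,
		// so verification is skipped here (do not do this in production code).
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}

		url := "https://192.168.49.254:8443/healthz"
		delay := 500 * time.Millisecond
		for attempt := 1; attempt <= 5; attempt++ {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Println("apiserver healthy")
					return
				}
				err = fmt.Errorf("healthz returned %d", resp.StatusCode)
			}
			fmt.Printf("attempt %d: %v; retrying after %s\n", attempt, err, delay)
			time.Sleep(delay)
			delay *= 2 // back off, roughly like the retry.go delays in the log
		}
		fmt.Println("apiserver did not become healthy")
	}
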
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (730.361517ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:32.977750   90793 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:32.978003   90793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:32.978013   90793 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:32.978017   90793 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:32.978283   90793 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:32.978455   90793 out.go:368] Setting JSON to false
	I0919 22:36:32.978474   90793 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:32.978639   90793 notify.go:220] Checking for updates...
	I0919 22:36:32.978813   90793 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:32.978833   90793 status.go:174] checking status of ha-984158 ...
	I0919 22:36:32.979302   90793 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:33.002263   90793 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:33.002327   90793 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:33.002630   90793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:33.024522   90793 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:33.024799   90793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:33.024833   90793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:33.045040   90793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:33.140157   90793 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:33.145204   90793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:33.157403   90793 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:33.211402   90793 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:33.201510893 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:33.212141   90793 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:33.212176   90793 api_server.go:166] Checking apiserver status ...
	I0919 22:36:33.212221   90793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:33.224590   90793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:33.234446   90793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:33.234507   90793 ssh_runner.go:195] Run: ls
	I0919 22:36:33.238720   90793 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:33.243440   90793 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:33.243461   90793 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:33.243469   90793 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:33.243487   90793 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:33.243744   90793 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:33.261685   90793 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:33.261709   90793 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:33.261988   90793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:33.280566   90793 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:33.280860   90793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:33.280916   90793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:33.299645   90793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:33.394827   90793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:33.407762   90793 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:33.407792   90793 api_server.go:166] Checking apiserver status ...
	I0919 22:36:33.407831   90793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:33.420137   90793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:33.430685   90793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:33.430732   90793 ssh_runner.go:195] Run: ls
	I0919 22:36:33.436174   90793 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:33.441310   90793 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:33.441346   90793 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:33.441357   90793 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:33.441377   90793 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:33.441723   90793 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:33.462967   90793 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:33.462992   90793 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:33.463259   90793 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:33.481036   90793 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:33.481295   90793 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:33.481337   90793 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:33.501399   90793 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:33.595709   90793 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:33.609268   90793 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:33.609294   90793 api_server.go:166] Checking apiserver status ...
	I0919 22:36:33.609327   90793 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:33.621185   90793 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:33.631076   90793 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:33.631159   90793 ssh_runner.go:195] Run: ls
	I0919 22:36:33.635463   90793 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:33.639964   90793 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:33.639984   90793 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:33.639993   90793 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:33.640008   90793 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:33.640264   90793 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:33.658745   90793 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:33.658772   90793 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:33.658783   90793 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:33.663986   18175 retry.go:31] will retry after 867.944654ms: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (747.546427ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:34.575980   91018 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:34.576230   91018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:34.576238   91018 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:34.576242   91018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:34.576423   91018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:34.576593   91018 out.go:368] Setting JSON to false
	I0919 22:36:34.576615   91018 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:34.576702   91018 notify.go:220] Checking for updates...
	I0919 22:36:34.577188   91018 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:34.577221   91018 status.go:174] checking status of ha-984158 ...
	I0919 22:36:34.577844   91018 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:34.601344   91018 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:34.601391   91018 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:34.601709   91018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:34.620343   91018 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:34.620635   91018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:34.620671   91018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:34.639093   91018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:34.732550   91018 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:34.737153   91018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:34.750323   91018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:34.811518   91018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:34.798591696 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:34.811994   91018 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:34.812019   91018 api_server.go:166] Checking apiserver status ...
	I0919 22:36:34.812051   91018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:34.824394   91018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:34.835422   91018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:34.835484   91018 ssh_runner.go:195] Run: ls
	I0919 22:36:34.840919   91018 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:34.847239   91018 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:34.847266   91018 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:34.847276   91018 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:34.847297   91018 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:34.847542   91018 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:34.866744   91018 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:34.866785   91018 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:34.867122   91018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:34.890383   91018 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:34.890675   91018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:34.890721   91018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:34.911319   91018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:35.007385   91018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:35.022359   91018 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:35.022396   91018 api_server.go:166] Checking apiserver status ...
	I0919 22:36:35.022467   91018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:35.035665   91018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:35.048164   91018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.048222   91018 ssh_runner.go:195] Run: ls
	I0919 22:36:35.052659   91018 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:35.057946   91018 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:35.057970   91018 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:35.057979   91018 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:35.057994   91018 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:35.058314   91018 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:35.077521   91018 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:35.077549   91018 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:35.077849   91018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:35.098503   91018 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:35.098824   91018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:35.098870   91018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:35.117914   91018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:35.211675   91018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:35.223936   91018 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:35.223971   91018 api_server.go:166] Checking apiserver status ...
	I0919 22:36:35.224033   91018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:35.235901   91018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:35.247898   91018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:35.247953   91018 ssh_runner.go:195] Run: ls
	I0919 22:36:35.251963   91018 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:35.256249   91018 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:35.256281   91018 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:35.256290   91018 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:35.256304   91018 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:35.256541   91018 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:35.275819   91018 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:35.275843   91018 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:35.275848   91018 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:35.280748   18175 retry.go:31] will retry after 1.977611493s: exit status 7
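The non-zero exit in every pass traces back to the last node check: docker container inspect ha-984158-m04 --format={{.State.Status}} reports the m04 container as not running, so the worker is shown as Stopped and the status command exits with status 7, which is what the harness keeps retrying on. A minimal sketch of that host-state check in Go using os/exec; the node names come from the trace, and collapsing every non-"running" state to Stopped is a simplification of minikube's own mapping:

	// containerstate.go: sketch of the per-node host check seen in the trace:
	// ask Docker for the container's State.Status and reduce it to a simple
	// running/stopped answer. The mapping is deliberately simplified.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostRunning(name string) (bool, error) {
		out, err := exec.Command("docker", "container", "inspect", name,
			"--format", "{{.State.Status}}").Output()
		if err != nil {
			return false, fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)) == "running", nil
	}

	func main() {
		nodes := []string{"ha-984158", "ha-984158-m02", "ha-984158-m03", "ha-984158-m04"}
		for _, node := range nodes {
			running, err := hostRunning(node)
			switch {
			case err != nil:
				fmt.Printf("%s: %v\n", node, err)
			case running:
				fmt.Printf("%s: host Running\n", node)
			default:
				fmt.Printf("%s: host Stopped\n", node)
			}
		}
	}
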
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (774.215127ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:37.309026   91230 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:37.309334   91230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:37.309359   91230 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:37.309366   91230 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:37.309654   91230 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:37.309877   91230 out.go:368] Setting JSON to false
	I0919 22:36:37.309901   91230 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:37.309977   91230 notify.go:220] Checking for updates...
	I0919 22:36:37.310448   91230 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:37.310474   91230 status.go:174] checking status of ha-984158 ...
	I0919 22:36:37.310952   91230 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:37.330333   91230 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:37.330364   91230 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:37.330657   91230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:37.355884   91230 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:37.356212   91230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:37.356257   91230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:37.375263   91230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:37.471982   91230 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:37.476875   91230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:37.490905   91230 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:37.553647   91230 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:37.540766796 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:37.554152   91230 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:37.554176   91230 api_server.go:166] Checking apiserver status ...
	I0919 22:36:37.554207   91230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.566628   91230 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:37.577257   91230 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:37.577314   91230 ssh_runner.go:195] Run: ls
	I0919 22:36:37.581548   91230 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:37.588019   91230 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:37.588082   91230 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:37.588126   91230 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:37.588153   91230 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:37.588514   91230 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:37.612158   91230 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:37.612189   91230 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:37.613262   91230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:37.632713   91230 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:37.632983   91230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:37.633016   91230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:37.655817   91230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:37.751911   91230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:37.764864   91230 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:37.764888   91230 api_server.go:166] Checking apiserver status ...
	I0919 22:36:37.764919   91230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.777112   91230 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:37.789132   91230 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:37.789191   91230 ssh_runner.go:195] Run: ls
	I0919 22:36:37.793922   91230 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:37.798786   91230 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:37.798814   91230 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:37.798824   91230 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:37.798840   91230 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:37.799139   91230 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:37.819941   91230 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:37.819970   91230 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:37.820313   91230 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:37.846691   91230 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:37.847008   91230 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:37.847062   91230 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:37.866228   91230 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:37.965004   91230 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:37.978622   91230 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:37.978649   91230 api_server.go:166] Checking apiserver status ...
	I0919 22:36:37.978682   91230 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:37.991624   91230 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:38.002725   91230 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:38.002780   91230 ssh_runner.go:195] Run: ls
	I0919 22:36:38.006881   91230 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:38.011278   91230 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:38.011302   91230 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:38.011313   91230 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:38.011331   91230 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:38.011578   91230 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:38.029709   91230 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:38.029732   91230 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:38.029738   91230 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:38.034656   18175 retry.go:31] will retry after 2.369173831s: exit status 7
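Every pass also logs "unable to find freezer cgroup" before reaching the healthz probe: the checker greps /proc/<pid>/cgroup for a freezer controller entry, which only exists on a cgroup v1 hierarchy, so on a unified cgroup v2 host the grep exits 1 and, as the trace shows, the check simply falls back to the healthz probe. A minimal sketch in Go that classifies the hierarchy by reading the same file; it does not reproduce minikube's exact check:

	// cgroupcheck.go: sketch that classifies a process's cgroup setup by reading
	// /proc/<pid>/cgroup, mirroring the freezer lookup in the trace. On cgroup v2
	// there is a single "0::<path>" line and no freezer controller entry.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		pid := "self"
		if len(os.Args) > 1 {
			pid = os.Args[1]
		}
		data, err := os.ReadFile("/proc/" + pid + "/cgroup")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		freezer := ""
		for _, line := range strings.Split(strings.TrimSpace(string(data)), "\n") {
			// cgroup v1 format: hierarchy-ID:controller-list:cgroup-path
			parts := strings.SplitN(line, ":", 3)
			if len(parts) == 3 && strings.Contains(","+parts[1]+",", ",freezer,") {
				freezer = parts[2]
			}
		}
		if freezer != "" {
			fmt.Println("cgroup v1 freezer path:", freezer)
		} else {
			fmt.Println("no freezer controller entry (typical of cgroup v2 hosts)")
		}
	}
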
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (788.258368ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:40.452285   91460 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:40.452386   91460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:40.452391   91460 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:40.452396   91460 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:40.452585   91460 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:40.452768   91460 out.go:368] Setting JSON to false
	I0919 22:36:40.452790   91460 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:40.452932   91460 notify.go:220] Checking for updates...
	I0919 22:36:40.453266   91460 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:40.453295   91460 status.go:174] checking status of ha-984158 ...
	I0919 22:36:40.453741   91460 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:40.476302   91460 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:40.476346   91460 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:40.476687   91460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:40.497354   91460 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:40.497783   91460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:40.497906   91460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:40.518246   91460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:40.613662   91460 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:40.618665   91460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:40.630984   91460 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:40.695994   91460 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:40.684061171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:40.696598   91460 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:40.696628   91460 api_server.go:166] Checking apiserver status ...
	I0919 22:36:40.696661   91460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.711007   91460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:40.721501   91460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:40.721559   91460 ssh_runner.go:195] Run: ls
	I0919 22:36:40.725461   91460 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:40.731192   91460 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:40.731215   91460 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:40.731225   91460 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:40.731245   91460 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:40.731482   91460 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:40.757493   91460 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:40.757518   91460 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:40.757798   91460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:40.778065   91460 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:40.778393   91460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:40.778440   91460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:40.800172   91460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:40.898294   91460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:40.916586   91460 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:40.916621   91460 api_server.go:166] Checking apiserver status ...
	I0919 22:36:40.916663   91460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:40.928797   91460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:40.941773   91460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:40.941833   91460 ssh_runner.go:195] Run: ls
	I0919 22:36:40.947658   91460 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:40.952398   91460 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:40.952426   91460 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:40.952435   91460 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:40.952454   91460 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:40.952688   91460 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:40.972659   91460 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:40.972700   91460 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:40.973024   91460 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:40.996523   91460 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:40.996963   91460 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:40.997050   91460 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:41.023520   91460 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:41.119415   91460 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:41.131984   91460 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:41.132009   91460 api_server.go:166] Checking apiserver status ...
	I0919 22:36:41.132064   91460 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:41.148410   91460 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:41.160022   91460 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:41.160091   91460 ssh_runner.go:195] Run: ls
	I0919 22:36:41.164542   91460 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:41.168971   91460 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:41.168996   91460 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:41.169013   91460 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:41.169032   91460 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:41.169403   91460 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:41.188896   91460 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:41.188933   91460 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:41.188943   91460 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:41.194369   18175 retry.go:31] will retry after 5.753980599s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (736.433597ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:46.990692   91691 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:46.991149   91691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:46.991210   91691 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:46.991221   91691 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:46.991830   91691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:46.992571   91691 out.go:368] Setting JSON to false
	I0919 22:36:46.992604   91691 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:46.992666   91691 notify.go:220] Checking for updates...
	I0919 22:36:46.993258   91691 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:46.993301   91691 status.go:174] checking status of ha-984158 ...
	I0919 22:36:46.994068   91691 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:47.013736   91691 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:47.013771   91691 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:47.013995   91691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:47.032277   91691 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:47.032520   91691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:47.032560   91691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:47.054029   91691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:47.148435   91691 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:47.153063   91691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:47.165063   91691 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:47.221342   91691 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:47.211076427 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:47.221901   91691 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:47.221930   91691 api_server.go:166] Checking apiserver status ...
	I0919 22:36:47.221971   91691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.235265   91691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:47.247380   91691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:47.247453   91691 ssh_runner.go:195] Run: ls
	I0919 22:36:47.252068   91691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:47.256418   91691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:47.256442   91691 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:47.256451   91691 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:47.256466   91691 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:47.256734   91691 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:47.275537   91691 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:47.275566   91691 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:47.275797   91691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:47.295655   91691 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:47.295962   91691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:47.295998   91691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:47.315239   91691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:47.411433   91691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:47.424263   91691 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:47.424303   91691 api_server.go:166] Checking apiserver status ...
	I0919 22:36:47.424347   91691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.438001   91691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:47.449280   91691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:47.449332   91691 ssh_runner.go:195] Run: ls
	I0919 22:36:47.453188   91691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:47.457302   91691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:47.457330   91691 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:47.457346   91691 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:47.457373   91691 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:47.457672   91691 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:47.477379   91691 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:47.477405   91691 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:47.477659   91691 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:47.497679   91691 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:47.497967   91691 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:47.498011   91691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:47.516980   91691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:47.612492   91691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:47.625742   91691 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:47.625773   91691 api_server.go:166] Checking apiserver status ...
	I0919 22:36:47.625815   91691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:47.638304   91691 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:47.651067   91691 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:47.651169   91691 ssh_runner.go:195] Run: ls
	I0919 22:36:47.655347   91691 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:47.659784   91691 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:47.659810   91691 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:47.659821   91691 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:47.659843   91691 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:47.660177   91691 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:47.679276   91691 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:47.679303   91691 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:47.679310   91691 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:47.685891   18175 retry.go:31] will retry after 7.01129405s: exit status 7
E0919 22:36:52.324501   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (732.422013ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:36:54.742868   91942 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:36:54.742989   91942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:54.742995   91942 out.go:374] Setting ErrFile to fd 2...
	I0919 22:36:54.743000   91942 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:36:54.743261   91942 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:36:54.743474   91942 out.go:368] Setting JSON to false
	I0919 22:36:54.743502   91942 mustload.go:65] Loading cluster: ha-984158
	I0919 22:36:54.743579   91942 notify.go:220] Checking for updates...
	I0919 22:36:54.743880   91942 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:36:54.743902   91942 status.go:174] checking status of ha-984158 ...
	I0919 22:36:54.744375   91942 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:36:54.765270   91942 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:36:54.765304   91942 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:54.765554   91942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:36:54.784615   91942 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:36:54.784857   91942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:54.784902   91942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:36:54.803895   91942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:36:54.899988   91942 ssh_runner.go:195] Run: systemctl --version
	I0919 22:36:54.905285   91942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:54.917847   91942 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:36:54.973935   91942 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:36:54.963778878 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:36:54.974484   91942 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:54.974516   91942 api_server.go:166] Checking apiserver status ...
	I0919 22:36:54.974558   91942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:54.988998   91942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:36:55.003695   91942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:55.003750   91942 ssh_runner.go:195] Run: ls
	I0919 22:36:55.007588   91942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:55.013035   91942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:55.013056   91942 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:36:55.013066   91942 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:55.013080   91942 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:36:55.013352   91942 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:36:55.032805   91942 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:36:55.032833   91942 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:55.033180   91942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:36:55.052497   91942 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:36:55.052840   91942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:55.052887   91942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:36:55.071898   91942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:36:55.166780   91942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:55.180508   91942 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:55.180533   91942 api_server.go:166] Checking apiserver status ...
	I0919 22:36:55.180570   91942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:55.193482   91942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:36:55.205018   91942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:55.205069   91942 ssh_runner.go:195] Run: ls
	I0919 22:36:55.209075   91942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:55.213803   91942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:55.213833   91942 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:36:55.213844   91942 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:55.213866   91942 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:36:55.214251   91942 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:36:55.232868   91942 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:36:55.232892   91942 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:55.233157   91942 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:36:55.251897   91942 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:36:55.252214   91942 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:36:55.252272   91942 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:36:55.270480   91942 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:36:55.364421   91942 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:36:55.376426   91942 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:36:55.376451   91942 api_server.go:166] Checking apiserver status ...
	I0919 22:36:55.376482   91942 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:36:55.387697   91942 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:36:55.399208   91942 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:36:55.399269   91942 ssh_runner.go:195] Run: ls
	I0919 22:36:55.403650   91942 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:36:55.408180   91942 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:36:55.408207   91942 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:36:55.408217   91942 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:36:55.408231   91942 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:36:55.408464   91942 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:36:55.427371   91942 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:36:55.427404   91942 status.go:384] host is not running, skipping remaining checks
	I0919 22:36:55.427412   91942 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:36:55.432256   18175 retry.go:31] will retry after 6.941376904s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (736.0538ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:37:02.417459   92177 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:37:02.417742   92177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:02.417753   92177 out.go:374] Setting ErrFile to fd 2...
	I0919 22:37:02.417759   92177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:02.417978   92177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:37:02.418197   92177 out.go:368] Setting JSON to false
	I0919 22:37:02.418221   92177 mustload.go:65] Loading cluster: ha-984158
	I0919 22:37:02.418291   92177 notify.go:220] Checking for updates...
	I0919 22:37:02.418641   92177 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:37:02.418669   92177 status.go:174] checking status of ha-984158 ...
	I0919 22:37:02.419149   92177 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:37:02.439507   92177 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:37:02.439536   92177 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:37:02.439820   92177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:37:02.461182   92177 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:37:02.461478   92177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:02.461535   92177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:37:02.482241   92177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:37:02.578879   92177 ssh_runner.go:195] Run: systemctl --version
	I0919 22:37:02.583601   92177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:02.597342   92177 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:37:02.654903   92177 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:37:02.644258581 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:02.655684   92177 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:02.655718   92177 api_server.go:166] Checking apiserver status ...
	I0919 22:37:02.655781   92177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:02.668444   92177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:37:02.679389   92177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:02.679443   92177 ssh_runner.go:195] Run: ls
	I0919 22:37:02.684123   92177 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:02.690920   92177 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:02.690952   92177 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:37:02.690962   92177 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:02.690978   92177 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:37:02.691290   92177 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:37:02.710902   92177 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:37:02.710927   92177 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:37:02.711246   92177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:37:02.730277   92177 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:37:02.730607   92177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:02.730654   92177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:37:02.749038   92177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:37:02.843466   92177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:02.856786   92177 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:02.856813   92177 api_server.go:166] Checking apiserver status ...
	I0919 22:37:02.856845   92177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:02.870921   92177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:37:02.881760   92177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:02.881838   92177 ssh_runner.go:195] Run: ls
	I0919 22:37:02.886021   92177 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:02.890302   92177 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:02.890332   92177 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:37:02.890339   92177 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:02.890353   92177 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:37:02.890579   92177 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:37:02.909592   92177 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:37:02.909617   92177 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:37:02.909936   92177 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:37:02.930010   92177 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:37:02.930420   92177 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:02.930472   92177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:37:02.949236   92177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:37:03.043830   92177 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:03.056930   92177 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:03.056961   92177 api_server.go:166] Checking apiserver status ...
	I0919 22:37:03.057002   92177 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:03.068862   92177 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:37:03.079441   92177 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:03.079494   92177 ssh_runner.go:195] Run: ls
	I0919 22:37:03.083504   92177 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:03.087779   92177 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:03.087816   92177 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:37:03.087829   92177 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:03.087848   92177 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:37:03.088128   92177 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:37:03.106341   92177 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:37:03.106369   92177 status.go:384] host is not running, skipping remaining checks
	I0919 22:37:03.106377   92177 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
I0919 22:37:03.111298   18175 retry.go:31] will retry after 23.452046511s: exit status 7
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (728.203611ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:37:26.612903   92476 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:37:26.613311   92476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:26.613325   92476 out.go:374] Setting ErrFile to fd 2...
	I0919 22:37:26.613329   92476 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:37:26.613530   92476 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:37:26.613721   92476 out.go:368] Setting JSON to false
	I0919 22:37:26.613743   92476 mustload.go:65] Loading cluster: ha-984158
	I0919 22:37:26.613894   92476 notify.go:220] Checking for updates...
	I0919 22:37:26.614156   92476 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:37:26.614181   92476 status.go:174] checking status of ha-984158 ...
	I0919 22:37:26.614668   92476 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:37:26.635254   92476 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:37:26.635296   92476 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:37:26.635575   92476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:37:26.654341   92476 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:37:26.654584   92476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:26.654622   92476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:37:26.672542   92476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:37:26.768009   92476 ssh_runner.go:195] Run: systemctl --version
	I0919 22:37:26.772753   92476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:26.784924   92476 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:37:26.842198   92476 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-09-19 22:37:26.831031853 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:37:26.842729   92476 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:26.842756   92476 api_server.go:166] Checking apiserver status ...
	I0919 22:37:26.842785   92476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:26.854559   92476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup
	W0919 22:37:26.864734   92476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1463/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:26.864910   92476 ssh_runner.go:195] Run: ls
	I0919 22:37:26.869468   92476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:26.874010   92476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:26.874035   92476 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:37:26.874044   92476 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:26.874059   92476 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:37:26.874331   92476 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:37:26.892921   92476 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:37:26.892950   92476 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:37:26.893258   92476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:37:26.911176   92476 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:37:26.911446   92476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:26.911496   92476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:37:26.929127   92476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32808 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:37:27.021412   92476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:27.044122   92476 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:27.044155   92476 api_server.go:166] Checking apiserver status ...
	I0919 22:37:27.044189   92476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:27.055916   92476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup
	W0919 22:37:27.066905   92476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/410/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:27.066977   92476 ssh_runner.go:195] Run: ls
	I0919 22:37:27.070919   92476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:27.075190   92476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:27.075219   92476 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:37:27.075227   92476 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:27.075246   92476 status.go:174] checking status of ha-984158-m03 ...
	I0919 22:37:27.075543   92476 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:37:27.095780   92476 status.go:371] ha-984158-m03 host status = "Running" (err=<nil>)
	I0919 22:37:27.095804   92476 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:37:27.096090   92476 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:37:27.115677   92476 host.go:66] Checking if "ha-984158-m03" exists ...
	I0919 22:37:27.115965   92476 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:37:27.116010   92476 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:37:27.134787   92476 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:37:27.228704   92476 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:37:27.241760   92476 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:37:27.241787   92476 api_server.go:166] Checking apiserver status ...
	I0919 22:37:27.241826   92476 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:37:27.253236   92476 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup
	W0919 22:37:27.263868   92476 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1320/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:37:27.263915   92476 ssh_runner.go:195] Run: ls
	I0919 22:37:27.267814   92476 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:37:27.272026   92476 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:37:27.272050   92476 status.go:463] ha-984158-m03 apiserver status = Running (err=<nil>)
	I0919 22:37:27.272062   92476 status.go:176] ha-984158-m03 status: &{Name:ha-984158-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:37:27.272081   92476 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:37:27.272342   92476 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:37:27.291476   92476 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:37:27.291499   92476 status.go:384] host is not running, skipping remaining checks
	I0919 22:37:27.291504   92476 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:434: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5" : exit status 7
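[editor's note] The stderr trace above walks the status probe for each control-plane node: find the kube-apiserver pid, try (and here fail) to read its freezer cgroup, then fall back to GET https://192.168.49.254:8443/healthz and treat a 200 "ok" as Running. The sketch below is a minimal Go illustration of that last step only, not minikube's actual code; the endpoint value and the InsecureSkipVerify transport are assumptions for the example (the real check trusts the cluster CA).

// healthcheck_sketch.go - hedged illustration of the /healthz probe seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// apiserverHealthy returns true when GET <endpoint>/healthz answers 200 with body "ok",
// mirroring the "returned 200: ok" lines in the trace above.
func apiserverHealthy(endpoint string) (bool, error) {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption for the sketch: skip certificate verification.
			// A real client would verify against the cluster's CA bundle instead.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return false, err
	}
	return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
}

func main() {
	ok, err := apiserverHealthy("https://192.168.49.254:8443")
	fmt.Println(ok, err)
}

Note that the exit status 7 itself comes from ha-984158-m04 being reported Stopped, not from the healthz probes, which all succeeded.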
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 68186,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:33:25.030742493Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "b35e3615d35b58bcec7825bb039821b1dfb6293e56fe1316d0ae491d5b3b0558",
	            "SandboxKey": "/var/run/docker/netns/b35e3615d35b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32783"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32784"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32787"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "5a:4d:99:af:3d:e2",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "150c15de67a84040f10d82e99ed82c2230b34908474820017c5633e8a5513d79",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
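[editor's note] In the inspect output above, HostConfig.PortBindings requests each port with an empty HostPort (let Docker pick), while NetworkSettings.Ports records the ports actually assigned (e.g. 22/tcp on 127.0.0.1:32783). The earlier log lines read that value back with a `docker container inspect -f` Go template; the snippet below is a small, hedged Go sketch of the same lookup. It assumes the docker CLI is on PATH and a container named "ha-984158" exists; it is an illustration, not minikube's implementation.

// sshport_sketch.go - read the dynamically mapped SSH host port via the docker CLI.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostSSHPort extracts the host port bound to 22/tcp using the same Go template
// the log uses (without the extra wrapping quotes).
func hostSSHPort(container string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := hostSSHPort("ha-984158")
	fmt.Println(port, err) // expected to print 32783 for the container shown above
}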
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.236867419s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:35 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m03_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:35 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node start m02 --alsologtostderr -v 5                                                                                      │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:33:19
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:33:19.901060   67622 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:33:19.901185   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901193   67622 out.go:374] Setting ErrFile to fd 2...
	I0919 22:33:19.901198   67622 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:33:19.901448   67622 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:33:19.902017   67622 out.go:368] Setting JSON to false
	I0919 22:33:19.903166   67622 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4550,"bootTime":1758316650,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:33:19.903283   67622 start.go:140] virtualization: kvm guest
	I0919 22:33:19.906578   67622 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:33:19.908489   67622 notify.go:220] Checking for updates...
	I0919 22:33:19.908508   67622 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:33:19.910361   67622 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:33:19.912958   67622 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:19.914823   67622 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:33:19.919772   67622 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:33:19.921444   67622 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:33:19.923242   67622 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:33:19.947549   67622 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:33:19.947649   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.004707   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:19.994191177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.004832   67622 docker.go:318] overlay module found
	I0919 22:33:20.006907   67622 out.go:179] * Using the docker driver based on user configuration
	I0919 22:33:20.008195   67622 start.go:304] selected driver: docker
	I0919 22:33:20.008214   67622 start.go:918] validating driver "docker" against <nil>
	I0919 22:33:20.008227   67622 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:33:20.008818   67622 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:33:20.067697   67622 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:24 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:33:20.055128215 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:33:20.067871   67622 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:33:20.068167   67622 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:33:20.070129   67622 out.go:179] * Using Docker driver with root privileges
	I0919 22:33:20.071439   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:20.071513   67622 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0919 22:33:20.071523   67622 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:33:20.071600   67622 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:c
ni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:20.073188   67622 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:33:20.074628   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:20.076439   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:20.078066   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.078159   67622 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:33:20.078159   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:20.078174   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:20.078333   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:20.078348   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:20.078744   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:20.078777   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json: {Name:mk745b6092cc48756321ca371e559184d12db2be Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:20.100036   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:20.100059   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:20.100081   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:20.100133   67622 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:20.100248   67622 start.go:364] duration metric: took 93.303µs to acquireMachinesLock for "ha-984158"
	I0919 22:33:20.100277   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APISer
verIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMne
tPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:20.100380   67622 start.go:125] createHost starting for "" (driver="docker")
	I0919 22:33:20.103382   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:20.103623   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:20.103675   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:20.103751   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:20.103785   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.103860   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:20.103880   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:20.103895   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:20.104259   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 22:33:20.122340   67622 cli_runner.go:211] docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 22:33:20.122418   67622 network_create.go:284] running [docker network inspect ha-984158] to gather additional debugging logs...
	I0919 22:33:20.122455   67622 cli_runner.go:164] Run: docker network inspect ha-984158
	W0919 22:33:20.139578   67622 cli_runner.go:211] docker network inspect ha-984158 returned with exit code 1
	I0919 22:33:20.139605   67622 network_create.go:287] error running [docker network inspect ha-984158]: docker network inspect ha-984158: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network ha-984158 not found
	I0919 22:33:20.139623   67622 network_create.go:289] output of [docker network inspect ha-984158]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network ha-984158 not found
	
	** /stderr **
	I0919 22:33:20.139738   67622 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:20.159001   67622 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b807f0}
	I0919 22:33:20.159067   67622 network_create.go:124] attempt to create docker network ha-984158 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0919 22:33:20.159151   67622 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=ha-984158 ha-984158
	I0919 22:33:20.220465   67622 network_create.go:108] docker network ha-984158 192.168.49.0/24 created
	I0919 22:33:20.220505   67622 kic.go:121] calculated static IP "192.168.49.2" for the "ha-984158" container
	I0919 22:33:20.220576   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:20.238299   67622 cli_runner.go:164] Run: docker volume create ha-984158 --label name.minikube.sigs.k8s.io=ha-984158 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:20.257860   67622 oci.go:103] Successfully created a docker volume ha-984158
	I0919 22:33:20.258049   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --entrypoint /usr/bin/test -v ha-984158:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:20.650160   67622 oci.go:107] Successfully prepared a docker volume ha-984158
	I0919 22:33:20.650207   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:20.650234   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:20.650319   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:24.923696   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.273335756s)
	I0919 22:33:24.923745   67622 kic.go:203] duration metric: took 4.273508289s to extract preloaded images to volume ...
	W0919 22:33:24.923837   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:24.923868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:24.923905   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:24.980440   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158 --name ha-984158 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158 --network ha-984158 --ip 192.168.49.2 --volume ha-984158:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:25.243904   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Running}}
	I0919 22:33:25.262964   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:25.282632   67622 cli_runner.go:164] Run: docker exec ha-984158 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:25.335702   67622 oci.go:144] the created container "ha-984158" has a running status.
	I0919 22:33:25.335743   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa...
	I0919 22:33:26.151425   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:26.151477   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:26.176792   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.194873   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:26.194911   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:26.242371   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:26.260832   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:26.260926   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.280776   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.281060   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.281074   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:26.419419   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.419451   67622 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:33:26.419523   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.438011   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.438316   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.438334   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:33:26.587806   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:33:26.587878   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:26.606861   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:26.607093   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:26.607134   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:26.743969   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:26.744008   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:26.744055   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:26.744068   67622 provision.go:84] configureAuth start
	I0919 22:33:26.744152   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:26.765302   67622 provision.go:143] copyHostCerts
	I0919 22:33:26.765368   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765405   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:26.765414   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:26.765489   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:26.765575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765596   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:26.765600   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:26.765626   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:26.765682   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765696   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:26.765702   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:26.765725   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:26.765773   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:33:27.052522   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:27.052586   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:27.052619   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.077750   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.179645   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:27.179718   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:27.210288   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:27.210351   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:33:27.238586   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:27.238648   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 22:33:27.264405   67622 provision.go:87] duration metric: took 520.31998ms to configureAuth
	I0919 22:33:27.264432   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:27.264630   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:27.264744   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.284923   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:27.285168   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32783 <nil> <nil>}
	I0919 22:33:27.285188   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:27.533206   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:27.533232   67622 machine.go:96] duration metric: took 1.272377771s to provisionDockerMachine
	I0919 22:33:27.533245   67622 client.go:171] duration metric: took 7.429561262s to LocalClient.Create
	I0919 22:33:27.533269   67622 start.go:167] duration metric: took 7.429646395s to libmachine.API.Create "ha-984158"
	I0919 22:33:27.533281   67622 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:33:27.533292   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:27.533378   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:27.533430   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.551574   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.651298   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:27.655006   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:27.655037   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:27.655045   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:27.655051   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:27.655070   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:27.655147   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:27.655229   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:27.655238   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:27.655339   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:27.664695   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:27.695230   67622 start.go:296] duration metric: took 161.927495ms for postStartSetup
	I0919 22:33:27.695585   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.713847   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:27.714141   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:27.714182   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.735921   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.829368   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:27.833923   67622 start.go:128] duration metric: took 7.733528511s to createHost
	I0919 22:33:27.833953   67622 start.go:83] releasing machines lock for "ha-984158", held for 7.733693746s
	I0919 22:33:27.834022   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:33:27.851363   67622 ssh_runner.go:195] Run: cat /version.json
	I0919 22:33:27.851382   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:27.851422   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.851435   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:27.870773   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:27.871172   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:28.037834   67622 ssh_runner.go:195] Run: systemctl --version
	I0919 22:33:28.042707   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:28.184533   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:28.189494   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.213778   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:28.213869   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:28.245273   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:28.245311   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:28.245342   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:28.245409   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:28.260712   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:28.273221   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:28.273285   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:28.287690   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:28.303163   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:28.371756   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:28.449427   67622 docker.go:234] disabling docker service ...
	I0919 22:33:28.449499   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:28.467447   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:28.481298   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:28.558342   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:28.661953   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:28.675151   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:28.695465   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:28.695540   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.709844   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:28.709908   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.720817   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.731627   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.742506   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:28.753955   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.765830   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.784178   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:28.795285   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:28.804935   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:28.814326   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:28.918546   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
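Note on the block above: the sed commands rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image, systemd cgroup manager, conmon cgroup, unprivileged-port sysctl) before crio is restarted. A minimal verification sketch, not part of this run, assuming the same drop-in path used above:

    # grep for the values the sed edits above are expected to leave behind
    grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # expected, based on the edits in this run:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",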
	I0919 22:33:29.014541   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:29.014608   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:29.018746   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:29.018808   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:29.023643   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:29.059951   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:29.060029   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.098887   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:29.139500   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:29.141059   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:29.158455   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:29.162464   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.175140   67622 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Soc
ketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:33:29.175280   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:29.175333   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.248936   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.248961   67622 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:33:29.249018   67622 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:33:29.287448   67622 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:33:29.287472   67622 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:33:29.287479   67622 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:33:29.287577   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
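The [Unit]/[Service] fragment above is the kubelet systemd drop-in that minikube renders; it is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf later in this log. A quick way to confirm what the node's kubelet actually loaded (sketch only, not executed here):

    # show the kubelet unit plus all drop-ins, including 10-kubeadm.conf
    systemctl cat kubelet.service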
	I0919 22:33:29.287645   67622 ssh_runner.go:195] Run: crio config
	I0919 22:33:29.333242   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:29.333266   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:29.333277   67622 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:33:29.333307   67622 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:33:29.333435   67622 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:33:29.333463   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:29.333506   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:29.346933   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:29.347143   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
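Because the ip_vs modules were not found (see the lsmod check above), kube-vip is configured without IPVS control-plane load-balancing and instead announces the VIP 192.168.49.254 on eth0 via ARP with leader election. A hedged sketch for checking the result once the static pod is up; the mirror pod name assumes the usual <name>-<node> convention:

    kubectl -n kube-system get pod kube-vip-ha-984158
    ip addr show eth0 | grep 192.168.49.254   # VIP is bound only on the current leader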
	I0919 22:33:29.347207   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:29.356691   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:29.356785   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:33:29.366595   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:33:29.386942   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:29.409639   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
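The 2205-byte file written here is the kubeadm config rendered above; it is copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init later in this log. A sketch, not part of the run, for validating such a config by hand on kubeadm versions that ship the validate subcommand:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new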
	I0919 22:33:29.428838   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1364 bytes)
	I0919 22:33:29.449681   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:29.453679   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:29.465645   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:29.534315   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:29.558739   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:33:29.558767   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:29.558787   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:29.558925   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:29.558985   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:29.559000   67622 certs.go:256] generating profile certs ...
	I0919 22:33:29.559069   67622 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:29.559085   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt with IP's: []
	I0919 22:33:30.287530   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt ...
	I0919 22:33:30.287574   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt: {Name:mk4722cc3499628a90845973a8533bb2f9abaeaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287824   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key ...
	I0919 22:33:30.287842   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key: {Name:mk95f513fb24356a441cd3443b0c241a35c61186 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.287965   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f
	I0919 22:33:30.287986   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.254]
	I0919 22:33:30.489410   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f ...
	I0919 22:33:30.489443   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f: {Name:mk50e3acb42d56649151d2b237558cdb8ee1e1f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489635   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f ...
	I0919 22:33:30.489654   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f: {Name:mke306934752782de0837143fc2872d74f6e5eee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.489765   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:30.489897   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.aeed9d8f -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:30.489990   67622 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:30.490013   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt with IP's: []
	I0919 22:33:30.692692   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt ...
	I0919 22:33:30.692725   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt: {Name:mkec855f3fc5cc887af952272036f6b6db6c122d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.692913   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key ...
	I0919 22:33:30.692929   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key: {Name:mk41b934f9d330e25cbaab5814efeb52422665b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:30.693033   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:30.693058   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:30.693082   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:30.693113   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:30.693131   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:30.693163   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:30.693182   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:30.693202   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:30.693280   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:30.693327   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:30.693343   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:30.693379   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:30.693413   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:30.693444   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:30.693498   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:30.693554   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:30.693575   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:30.693594   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:30.694169   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:30.721034   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:30.747256   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:30.773231   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:30.799758   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0919 22:33:30.825801   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:30.852404   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:30.879195   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:30.905339   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:30.934694   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:30.960677   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:30.987763   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:33:31.008052   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:31.014839   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:31.025609   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029511   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.029570   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:31.036708   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:31.047387   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:31.058096   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062519   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.062579   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:31.070083   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:31.080599   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:31.091228   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095407   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.095480   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:31.102644   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
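The openssl/ln sequence above follows the standard OpenSSL CA layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash. The hash comes from the openssl x509 -hash call, for example (values match this run, where minikubeCA.pem is linked as b5213941.0):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941; the corresponding symlink is /etc/ssl/certs/b5213941.0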
	I0919 22:33:31.114044   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:31.118226   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:31.118374   67622 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:31.118467   67622 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:33:31.118521   67622 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:33:31.155950   67622 cri.go:89] found id: ""
	I0919 22:33:31.156024   67622 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:33:31.166037   67622 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 22:33:31.175817   67622 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 22:33:31.175867   67622 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 22:33:31.185690   67622 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 22:33:31.185707   67622 kubeadm.go:157] found existing configuration files:
	
	I0919 22:33:31.185748   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 22:33:31.195069   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 22:33:31.195184   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 22:33:31.204614   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 22:33:31.216208   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 22:33:31.216271   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 22:33:31.226344   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.239080   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 22:33:31.239168   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 22:33:31.248993   67622 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 22:33:31.258113   67622 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 22:33:31.258175   67622 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 22:33:31.267147   67622 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 22:33:31.307922   67622 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 22:33:31.308018   67622 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 22:33:31.323647   67622 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 22:33:31.323774   67622 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 22:33:31.323839   67622 kubeadm.go:310] OS: Linux
	I0919 22:33:31.323926   67622 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 22:33:31.324015   67622 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 22:33:31.324149   67622 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 22:33:31.324222   67622 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 22:33:31.324293   67622 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 22:33:31.324356   67622 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 22:33:31.324417   67622 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 22:33:31.324484   67622 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 22:33:31.377266   67622 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 22:33:31.377414   67622 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 22:33:31.377573   67622 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 22:33:31.384351   67622 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 22:33:31.386660   67622 out.go:252]   - Generating certificates and keys ...
	I0919 22:33:31.386732   67622 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 22:33:31.386811   67622 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 22:33:31.789403   67622 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 22:33:31.939575   67622 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 22:33:32.401218   67622 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 22:33:32.595052   67622 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 22:33:33.118331   67622 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 22:33:33.118543   67622 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.059417   67622 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 22:33:34.059600   67622 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-984158 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0919 22:33:34.382200   67622 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 22:33:34.860984   67622 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 22:33:34.940846   67622 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 22:33:34.940919   67622 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 22:33:35.161325   67622 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 22:33:35.301598   67622 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 22:33:35.610006   67622 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 22:33:35.767736   67622 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 22:33:36.001912   67622 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 22:33:36.002376   67622 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 22:33:36.005697   67622 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 22:33:36.010843   67622 out.go:252]   - Booting up control plane ...
	I0919 22:33:36.010955   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 22:33:36.011044   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 22:33:36.011162   67622 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 22:33:36.018352   67622 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 22:33:36.018463   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 22:33:36.024835   67622 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 22:33:36.025002   67622 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 22:33:36.025072   67622 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 22:33:36.099408   67622 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 22:33:36.099593   67622 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 22:33:37.100521   67622 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001186505s
	I0919 22:33:37.103674   67622 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 22:33:37.103813   67622 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I0919 22:33:37.103961   67622 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 22:33:37.104092   67622 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 22:33:38.781776   67622 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.678113429s
	I0919 22:33:39.011334   67622 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 1.907735584s
	I0919 22:33:43.273677   67622 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 6.17006372s
	I0919 22:33:43.285923   67622 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 22:33:43.298989   67622 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 22:33:43.310631   67622 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 22:33:43.310870   67622 kubeadm.go:310] [mark-control-plane] Marking the node ha-984158 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 22:33:43.319951   67622 kubeadm.go:310] [bootstrap-token] Using token: wc3lep.4w3ocubibd25hbwe
	I0919 22:33:43.321976   67622 out.go:252]   - Configuring RBAC rules ...
	I0919 22:33:43.322154   67622 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 22:33:43.325670   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 22:33:43.333517   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 22:33:43.338509   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 22:33:43.342046   67622 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 22:33:43.345237   67622 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 22:33:43.680686   67622 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 22:33:44.099041   67622 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 22:33:44.680531   67622 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 22:33:44.681480   67622 kubeadm.go:310] 
	I0919 22:33:44.681572   67622 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 22:33:44.681591   67622 kubeadm.go:310] 
	I0919 22:33:44.681690   67622 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 22:33:44.681708   67622 kubeadm.go:310] 
	I0919 22:33:44.681761   67622 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 22:33:44.681854   67622 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 22:33:44.681910   67622 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 22:33:44.681916   67622 kubeadm.go:310] 
	I0919 22:33:44.681968   67622 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 22:33:44.681978   67622 kubeadm.go:310] 
	I0919 22:33:44.682015   67622 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 22:33:44.682021   67622 kubeadm.go:310] 
	I0919 22:33:44.682066   67622 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 22:33:44.682162   67622 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 22:33:44.682244   67622 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 22:33:44.682258   67622 kubeadm.go:310] 
	I0919 22:33:44.682378   67622 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 22:33:44.682497   67622 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 22:33:44.682510   67622 kubeadm.go:310] 
	I0919 22:33:44.682620   67622 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.682733   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 22:33:44.682757   67622 kubeadm.go:310] 	--control-plane 
	I0919 22:33:44.682761   67622 kubeadm.go:310] 
	I0919 22:33:44.682837   67622 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 22:33:44.682844   67622 kubeadm.go:310] 
	I0919 22:33:44.682919   67622 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token wc3lep.4w3ocubibd25hbwe \
	I0919 22:33:44.683036   67622 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 22:33:44.685970   67622 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 22:33:44.686071   67622 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
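The join commands printed above embed a --discovery-token-ca-cert-hash. If it ever needs to be recomputed for this cluster, the standard kubeadm recipe applied to minikube's CA location (a sketch, not run here) is:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'
    # should match the sha256 value shown in the kubeadm join lines above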
	I0919 22:33:44.686097   67622 cni.go:84] Creating CNI manager for ""
	I0919 22:33:44.686119   67622 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0919 22:33:44.688616   67622 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 22:33:44.690471   67622 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 22:33:44.695364   67622 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 22:33:44.695381   67622 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 22:33:44.715791   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 22:33:44.939557   67622 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 22:33:44.939639   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:44.939678   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158 minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=true
	I0919 22:33:45.023827   67622 ops.go:34] apiserver oom_adj: -16
	I0919 22:33:45.023957   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:45.524455   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.024018   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:46.524600   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.024332   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.524121   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 22:33:47.592879   67622 kubeadm.go:1105] duration metric: took 2.653303844s to wait for elevateKubeSystemPrivileges
	I0919 22:33:47.592920   67622 kubeadm.go:394] duration metric: took 16.47455539s to StartCluster
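StartCluster finishes once the elevateKubeSystemPrivileges step above has created the minikube-rbac clusterrolebinding and the default service account exists in kube-system. A follow-up check sketch (not part of the test; the kubectl context name is assumed to equal the profile name):

    kubectl --context ha-984158 -n kube-system get serviceaccount default
    kubectl --context ha-984158 get clusterrolebinding minikube-rbac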
	I0919 22:33:47.592944   67622 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593012   67622 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:33:47.593661   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:47.593878   67622 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:47.593899   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:33:47.593915   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 22:33:47.593910   67622 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:33:47.593968   67622 addons.go:69] Setting storage-provisioner=true in profile "ha-984158"
	I0919 22:33:47.593987   67622 addons.go:238] Setting addon storage-provisioner=true in "ha-984158"
	I0919 22:33:47.594014   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.594020   67622 addons.go:69] Setting default-storageclass=true in profile "ha-984158"
	I0919 22:33:47.594052   67622 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-984158"
	I0919 22:33:47.594180   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:47.594397   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.594490   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.616114   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:33:47.616790   67622 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:33:47.616815   67622 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:33:47.616821   67622 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:33:47.616827   67622 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:33:47.616832   67622 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:33:47.616874   67622 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:33:47.617290   67622 addons.go:238] Setting addon default-storageclass=true in "ha-984158"
	I0919 22:33:47.617334   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:47.617664   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:47.618198   67622 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 22:33:47.619811   67622 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.619828   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 22:33:47.619873   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639214   67622 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.639233   67622 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 22:33:47.639292   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:47.639429   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.661245   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:47.673462   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 22:33:47.757401   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 22:33:47.772885   67622 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 22:33:47.832329   67622 start.go:976] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0919 22:33:48.046946   67622 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 22:33:48.048036   67622 addons.go:514] duration metric: took 454.124749ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 22:33:48.048079   67622 start.go:246] waiting for cluster config update ...
	I0919 22:33:48.048094   67622 start.go:255] writing updated cluster config ...
	I0919 22:33:48.049801   67622 out.go:203] 
	I0919 22:33:48.051165   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:48.051243   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.053137   67622 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:33:48.054674   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:33:48.056311   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:33:48.057779   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.057806   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:33:48.057888   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:33:48.057928   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:33:48.057940   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:33:48.058063   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:48.078572   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:33:48.078592   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:33:48.078612   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:33:48.078641   67622 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:33:48.078744   67622 start.go:364] duration metric: took 83.645µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:33:48.078773   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:48.078850   67622 start.go:125] createHost starting for "m02" (driver="docker")
	I0919 22:33:48.081555   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:33:48.081669   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:33:48.081703   67622 client.go:168] LocalClient.Create starting
	I0919 22:33:48.081781   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:33:48.081822   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081843   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.081910   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:33:48.081940   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:33:48.081960   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:33:48.082241   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:48.099940   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc0016638f0 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:33:48.099978   67622 kic.go:121] calculated static IP "192.168.49.3" for the "ha-984158-m02" container
	I0919 22:33:48.100047   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:33:48.119768   67622 cli_runner.go:164] Run: docker volume create ha-984158-m02 --label name.minikube.sigs.k8s.io=ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:33:48.140861   67622 oci.go:103] Successfully created a docker volume ha-984158-m02
	I0919 22:33:48.140948   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m02-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --entrypoint /usr/bin/test -v ha-984158-m02:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:33:48.564029   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m02
	I0919 22:33:48.564088   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:33:48.564128   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:33:48.564199   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:33:52.827364   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m02:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.263115206s)
	I0919 22:33:52.827395   67622 kic.go:203] duration metric: took 4.263265347s to extract preloaded images to volume ...
	W0919 22:33:52.827486   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:33:52.827514   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:33:52.827554   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:33:52.885075   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m02 --name ha-984158-m02 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m02 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m02 --network ha-984158 --ip 192.168.49.3 --volume ha-984158-m02:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:33:53.180687   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Running}}
	I0919 22:33:53.199679   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.219636   67622 cli_runner.go:164] Run: docker exec ha-984158-m02 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:33:53.277586   67622 oci.go:144] the created container "ha-984158-m02" has a running status.
	I0919 22:33:53.277613   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa...
	I0919 22:33:53.439379   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:33:53.439435   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:33:53.481669   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.502631   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:33:53.502661   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m02 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:33:53.550818   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:33:53.569934   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:33:53.570033   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.591163   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.591567   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.591594   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:33:53.732425   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.732454   67622 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:33:53.732620   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.753544   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.753771   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.753787   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:33:53.905778   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:33:53.905859   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:53.925947   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:53.926237   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:53.926262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:33:54.064017   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:33:54.064058   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:33:54.064091   67622 ubuntu.go:190] setting up certificates
	I0919 22:33:54.064128   67622 provision.go:84] configureAuth start
	I0919 22:33:54.064205   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.083365   67622 provision.go:143] copyHostCerts
	I0919 22:33:54.083408   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083437   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:33:54.083446   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:33:54.083518   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:33:54.083599   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083619   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:33:54.083625   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:33:54.083651   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:33:54.083695   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083712   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:33:54.083718   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:33:54.083741   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:33:54.083825   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:33:54.283812   67622 provision.go:177] copyRemoteCerts
	I0919 22:33:54.283869   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:33:54.283908   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.302357   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.401996   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:33:54.402067   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:33:54.430462   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:33:54.430540   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:33:54.457015   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:33:54.457097   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:33:54.483980   67622 provision.go:87] duration metric: took 419.834494ms to configureAuth
	I0919 22:33:54.484006   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:33:54.484189   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:54.484291   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.502801   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:33:54.503005   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32788 <nil> <nil>}
	I0919 22:33:54.503020   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:33:54.741937   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:33:54.741974   67622 machine.go:96] duration metric: took 1.172016504s to provisionDockerMachine
	I0919 22:33:54.741989   67622 client.go:171] duration metric: took 6.660276334s to LocalClient.Create
	I0919 22:33:54.742015   67622 start.go:167] duration metric: took 6.660346483s to libmachine.API.Create "ha-984158"
	I0919 22:33:54.742030   67622 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:33:54.742043   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:33:54.742141   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:33:54.742204   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.760779   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:54.861057   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:33:54.864884   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:33:54.864926   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:33:54.864936   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:33:54.864942   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:33:54.864952   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:33:54.865018   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:33:54.865096   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:33:54.865119   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:33:54.865208   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:33:54.874518   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:54.902675   67622 start.go:296] duration metric: took 160.632418ms for postStartSetup
	I0919 22:33:54.903619   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:54.921915   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:33:54.922275   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:33:54.922332   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:54.939498   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.032204   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:33:55.036544   67622 start.go:128] duration metric: took 6.957677622s to createHost
	I0919 22:33:55.036576   67622 start.go:83] releasing machines lock for "ha-984158-m02", held for 6.957813538s
	I0919 22:33:55.036645   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:33:55.056621   67622 out.go:179] * Found network options:
	I0919 22:33:55.058171   67622 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:33:55.059521   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:33:55.059575   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:33:55.059642   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:33:55.059693   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.059730   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:33:55.059795   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:33:55.079269   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.079505   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32788 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:33:55.307919   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:33:55.312965   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.336548   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:33:55.336628   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:33:55.368875   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:33:55.368896   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:33:55.368929   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:33:55.368975   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:33:55.384084   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:33:55.396627   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:33:55.396684   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:33:55.411878   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:33:55.426921   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:33:55.498750   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:33:55.574511   67622 docker.go:234] disabling docker service ...
	I0919 22:33:55.574592   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:33:55.592451   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:33:55.605407   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:33:55.676576   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:33:55.779960   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:33:55.791691   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:33:55.810222   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:33:55.810287   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.823669   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:33:55.823742   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.835957   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.848163   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.862113   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:33:55.874185   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.886226   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.904556   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:33:55.915914   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:33:55.925425   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:33:55.934730   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:56.048946   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:33:56.146544   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:33:56.146625   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:33:56.150812   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:33:56.150868   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:33:56.155192   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:33:56.191696   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:33:56.191785   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.233991   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:33:56.274090   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:33:56.275720   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:33:56.276812   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:33:56.294583   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:33:56.298596   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:56.311418   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:33:56.311645   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:33:56.311889   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:33:56.330141   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:56.330381   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:33:56.330391   67622 certs.go:194] generating shared ca certs ...
	I0919 22:33:56.330404   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.330513   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:33:56.330548   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:33:56.330558   67622 certs.go:256] generating profile certs ...
	I0919 22:33:56.330645   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:33:56.330671   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:33:56.330686   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:33:56.589696   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 ...
	I0919 22:33:56.589724   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648: {Name:mk231e62d196ad4ac4ba36bf02a832f78de0258d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.589931   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 ...
	I0919 22:33:56.589950   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648: {Name:mkf30412a461a8bacfd366640c7d4da1146a9418 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:33:56.590056   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:33:56.590233   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:33:56.590374   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:33:56.590389   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:33:56.590402   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:33:56.590416   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:33:56.590429   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:33:56.590440   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:33:56.590450   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:33:56.590459   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:33:56.590476   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:33:56.590527   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:33:56.590552   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:33:56.590561   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:33:56.590584   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:33:56.590605   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:33:56.590626   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:33:56.590665   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:33:56.590692   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:33:56.590708   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:56.590721   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:33:56.590767   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:56.609877   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:56.698485   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:33:56.703209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:33:56.716550   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:33:56.720735   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:33:56.733890   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:33:56.737616   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:33:56.750557   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:33:56.754948   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:33:56.770690   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:33:56.774864   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:33:56.787587   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:33:56.791154   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:33:56.804497   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:33:56.832411   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:33:56.858185   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:33:56.885311   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:33:56.911248   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0919 22:33:56.937552   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:33:56.963365   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:33:56.988811   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:33:57.014413   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:33:57.043525   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:33:57.069549   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:33:57.095993   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:33:57.115254   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:33:57.135395   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:33:57.155031   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:33:57.175220   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:33:57.194674   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:33:57.215027   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:33:57.235048   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:33:57.240702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:33:57.251492   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255754   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.255806   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:33:57.263388   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:33:57.274606   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:33:57.285494   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289707   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.289758   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:33:57.296995   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:33:57.307702   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:33:57.318927   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323131   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.323194   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:33:57.330266   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:33:57.340891   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:33:57.344726   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:33:57.344784   67622 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:33:57.344872   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:33:57.344897   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:33:57.344937   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:33:57.357462   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:33:57.357529   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:33:57.357582   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:33:57.367667   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:33:57.367722   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:33:57.377333   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:33:57.395969   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:33:57.418145   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:33:57.439308   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:33:57.443458   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:33:57.454967   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:33:57.522382   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:33:57.545690   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:33:57.545979   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mount
Type:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:33:57.546124   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:33:57.546185   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:33:57.565712   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:33:57.714381   67622 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:33:57.714452   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443"
	I0919 22:34:14.891768   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token 0rc9ka.7s4jxjfzbvya269x --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m02 --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443": (17.177290621s)
	I0919 22:34:14.891806   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:15.112649   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m02 minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:15.189152   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:15.268843   67622 start.go:319] duration metric: took 17.722860685s to joinCluster
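The join recorded above is the standard two-step kubeadm flow for adding a control-plane member: mint a join token on an existing control plane, then run kubeadm join on the new node with --control-plane. A minimal sketch, with the token and CA hash left as placeholders:
    # on an existing control-plane node
    $ kubeadm token create --print-join-command --ttl=0
    # on the joining node (m02 in this run)
    $ sudo kubeadm join control-plane.minikube.internal:8443 \
        --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
        --control-plane --apiserver-advertise-address=192.168.49.3 --apiserver-bind-port=8443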
	I0919 22:34:15.268921   67622 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:15.269212   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:15.270715   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:15.272193   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:15.373529   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:15.387143   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:15.387217   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:15.387440   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	W0919 22:34:17.391040   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:19.391218   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:21.391885   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:23.891865   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	W0919 22:34:25.892208   67622 node_ready.go:57] node "ha-984158-m02" has "Ready":"False" status (will retry)
	I0919 22:34:28.391466   67622 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:34:28.391502   67622 node_ready.go:38] duration metric: took 13.004045549s for node "ha-984158-m02" to be "Ready" ...
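The readiness poll above can be reproduced with kubectl; the context name below assumes the kubeconfig context matches the profile name ha-984158:
    $ kubectl --context ha-984158 get node ha-984158-m02
    $ kubectl --context ha-984158 wait --for=condition=Ready node/ha-984158-m02 --timeout=6m0s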
	I0919 22:34:28.391521   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:34:28.391578   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:34:28.403875   67622 api_server.go:72] duration metric: took 13.134915716s to wait for apiserver process to appear ...
	I0919 22:34:28.403907   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:34:28.403928   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:34:28.409570   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:34:28.410599   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:34:28.410630   67622 api_server.go:131] duration metric: took 6.715556ms to wait for apiserver health ...
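The same health probe can be issued by hand; -k (or --cacert pointing at the minikube CA) is needed because the apiserver presents a cluster-internal certificate, and the default system:public-info-viewer binding is assumed to allow unauthenticated access to /healthz:
    $ curl -k https://192.168.49.2:8443/healthz
    ok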
	I0919 22:34:28.410646   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:34:28.415646   67622 system_pods.go:59] 17 kube-system pods found
	I0919 22:34:28.415679   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.415685   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.415689   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.415692   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.415695   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.415698   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.415701   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.415704   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.415707   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.415710   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.415713   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.415715   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.415718   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.415721   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.415723   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.415726   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.415729   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.415734   67622 system_pods.go:74] duration metric: took 5.082988ms to wait for pod list to return data ...
	I0919 22:34:28.415742   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:34:28.418466   67622 default_sa.go:45] found service account: "default"
	I0919 22:34:28.418487   67622 default_sa.go:55] duration metric: took 2.73954ms for default service account to be created ...
	I0919 22:34:28.418498   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:34:28.422326   67622 system_pods.go:86] 17 kube-system pods found
	I0919 22:34:28.422351   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:34:28.422357   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:34:28.422361   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:34:28.422365   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:34:28.422368   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:34:28.422376   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:34:28.422379   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:34:28.422383   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:34:28.422386   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:34:28.422390   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:34:28.422393   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:34:28.422396   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:34:28.422399   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:34:28.422402   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:34:28.422405   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:34:28.422408   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:34:28.422415   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:34:28.422421   67622 system_pods.go:126] duration metric: took 3.917676ms to wait for k8s-apps to be running ...
	I0919 22:34:28.422429   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:34:28.422473   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:34:28.434607   67622 system_svc.go:56] duration metric: took 12.16943ms WaitForService to wait for kubelet
	I0919 22:34:28.434637   67622 kubeadm.go:578] duration metric: took 13.165683838s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:34:28.434659   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:34:28.437727   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437756   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437777   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:34:28.437784   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:34:28.437791   67622 node_conditions.go:105] duration metric: took 3.125214ms to run NodePressure ...
	I0919 22:34:28.437804   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:34:28.437837   67622 start.go:255] writing updated cluster config ...
	I0919 22:34:28.440033   67622 out.go:203] 
	I0919 22:34:28.441576   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:28.441673   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.443252   67622 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:34:28.444693   67622 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:34:28.446038   67622 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:34:28.447156   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.447185   67622 cache.go:58] Caching tarball of preloaded images
	I0919 22:34:28.447193   67622 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:34:28.447285   67622 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:34:28.447301   67622 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:34:28.447448   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:28.469851   67622 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:34:28.469873   67622 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:34:28.469889   67622 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:34:28.469913   67622 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:34:28.470008   67622 start.go:364] duration metric: took 81.331µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:34:28.470041   67622 start.go:93] Provisioning new machine with config: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:28.470170   67622 start.go:125] createHost starting for "m03" (driver="docker")
	I0919 22:34:28.472544   67622 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 22:34:28.472649   67622 start.go:159] libmachine.API.Create for "ha-984158" (driver="docker")
	I0919 22:34:28.472677   67622 client.go:168] LocalClient.Create starting
	I0919 22:34:28.472742   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 22:34:28.472780   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472799   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.472861   67622 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 22:34:28.472888   67622 main.go:141] libmachine: Decoding PEM data...
	I0919 22:34:28.472901   67622 main.go:141] libmachine: Parsing certificate...
	I0919 22:34:28.473209   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:28.490760   67622 network_create.go:77] Found existing network {name:ha-984158 subnet:0xc001af8060 gateway:[0 0 0 0 0 0 0 0 0 0 255 255 192 168 49 1] mtu:1500}
	I0919 22:34:28.490805   67622 kic.go:121] calculated static IP "192.168.49.4" for the "ha-984158-m03" container
	I0919 22:34:28.490880   67622 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 22:34:28.509896   67622 cli_runner.go:164] Run: docker volume create ha-984158-m03 --label name.minikube.sigs.k8s.io=ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true
	I0919 22:34:28.528837   67622 oci.go:103] Successfully created a docker volume ha-984158-m03
	I0919 22:34:28.528911   67622 cli_runner.go:164] Run: docker run --rm --name ha-984158-m03-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --entrypoint /usr/bin/test -v ha-984158-m03:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 22:34:28.927062   67622 oci.go:107] Successfully prepared a docker volume ha-984158-m03
	I0919 22:34:28.927168   67622 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:34:28.927199   67622 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 22:34:28.927268   67622 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	I0919 22:34:33.212737   67622 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v ha-984158-m03:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.285428249s)
	I0919 22:34:33.212770   67622 kic.go:203] duration metric: took 4.285569649s to extract preloaded images to volume ...
	W0919 22:34:33.212842   67622 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 22:34:33.212868   67622 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 22:34:33.212907   67622 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 22:34:33.271794   67622 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname ha-984158-m03 --name ha-984158-m03 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=ha-984158-m03 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=ha-984158-m03 --network ha-984158 --ip 192.168.49.4 --volume ha-984158-m03:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 22:34:33.577096   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Running}}
	I0919 22:34:33.595112   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:33.615056   67622 cli_runner.go:164] Run: docker exec ha-984158-m03 stat /var/lib/dpkg/alternatives/iptables
	I0919 22:34:33.665241   67622 oci.go:144] the created container "ha-984158-m03" has a running status.
	I0919 22:34:33.665277   67622 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa...
	I0919 22:34:34.167881   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I0919 22:34:34.167925   67622 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 22:34:34.195311   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.214983   67622 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 22:34:34.215010   67622 kic_runner.go:114] Args: [docker exec --privileged ha-984158-m03 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 22:34:34.269287   67622 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:34:34.290822   67622 machine.go:93] provisionDockerMachine start ...
	I0919 22:34:34.290917   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.310406   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.310629   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.310645   67622 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:34:34.449392   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.449418   67622 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:34:34.449477   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.470431   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.470643   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.470659   67622 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:34:34.622394   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:34:34.622486   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.641997   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:34.642244   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:34.642262   67622 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:34:34.780134   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:34:34.780169   67622 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:34:34.780191   67622 ubuntu.go:190] setting up certificates
	I0919 22:34:34.780205   67622 provision.go:84] configureAuth start
	I0919 22:34:34.780271   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:34.799584   67622 provision.go:143] copyHostCerts
	I0919 22:34:34.799658   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799692   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:34:34.799701   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:34:34.799769   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:34:34.799851   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799870   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:34:34.799877   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:34:34.799904   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:34:34.799966   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.799983   67622 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:34:34.799989   67622 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:34:34.800012   67622 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:34:34.800115   67622 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:34:34.944518   67622 provision.go:177] copyRemoteCerts
	I0919 22:34:34.944575   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:34:34.944606   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:34.963408   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.062939   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:34:35.063013   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:34:35.095527   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:34:35.095582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:34:35.122809   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:34:35.122880   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:34:35.150023   67622 provision.go:87] duration metric: took 369.804514ms to configureAuth
	I0919 22:34:35.150056   67622 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:34:35.150311   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:35.150452   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.170186   67622 main.go:141] libmachine: Using SSH client type: native
	I0919 22:34:35.170414   67622 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32793 <nil> <nil>}
	I0919 22:34:35.170546   67622 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:34:35.424872   67622 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:34:35.424903   67622 machine.go:96] duration metric: took 1.1340482s to provisionDockerMachine
	I0919 22:34:35.424913   67622 client.go:171] duration metric: took 6.952229218s to LocalClient.Create
	I0919 22:34:35.424932   67622 start.go:167] duration metric: took 6.95228363s to libmachine.API.Create "ha-984158"
	I0919 22:34:35.424941   67622 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:34:35.424950   67622 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:34:35.425005   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:34:35.425044   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.443122   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.542973   67622 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:34:35.547045   67622 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:34:35.547098   67622 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:34:35.547140   67622 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:34:35.547149   67622 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:34:35.547164   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:34:35.547243   67622 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:34:35.547346   67622 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:34:35.547359   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:34:35.547461   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:34:35.557222   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:35.587487   67622 start.go:296] duration metric: took 162.532916ms for postStartSetup
	I0919 22:34:35.587898   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.605883   67622 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:34:35.606188   67622 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:34:35.606230   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.625506   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.719327   67622 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:34:35.724945   67622 start.go:128] duration metric: took 7.25475977s to createHost
	I0919 22:34:35.724975   67622 start.go:83] releasing machines lock for "ha-984158-m03", held for 7.25495293s
	I0919 22:34:35.725066   67622 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:34:35.746436   67622 out.go:179] * Found network options:
	I0919 22:34:35.748613   67622 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:34:35.750204   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750230   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750252   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:34:35.750261   67622 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:34:35.750333   67622 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:34:35.750367   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.750414   67622 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:34:35.750481   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:34:35.770785   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:35.771520   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:34:36.012617   67622 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:34:36.017809   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.041480   67622 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:34:36.041572   67622 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:34:36.074662   67622 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 22:34:36.074688   67622 start.go:495] detecting cgroup driver to use...
	I0919 22:34:36.074719   67622 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:34:36.074766   67622 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:34:36.093544   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:34:36.107751   67622 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:34:36.107801   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:34:36.123972   67622 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:34:36.140690   67622 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:34:36.213915   67622 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:34:36.293890   67622 docker.go:234] disabling docker service ...
	I0919 22:34:36.293970   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:34:36.315495   67622 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:34:36.329394   67622 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:34:36.401603   67622 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:34:36.566519   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:34:36.580168   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:34:36.598521   67622 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:34:36.598580   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.612994   67622 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:34:36.613052   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.625369   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.636513   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.647884   67622 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:34:36.658467   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.670077   67622 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:34:36.688463   67622 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
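A quick way to spot-check the drop-in produced by the sed edits above (expected values taken from the commands in the log):
    $ grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
    # should show pause_image = "registry.k8s.io/pause:3.10.1", cgroup_manager = "systemd",
    # conmon_cgroup = "pod" and the "net.ipv4.ip_unprivileged_port_start=0" sysctl entry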
	I0919 22:34:36.700347   67622 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:34:36.710192   67622 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:34:36.722230   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:36.786818   67622 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:34:36.889165   67622 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:34:36.889244   67622 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:34:36.893369   67622 start.go:563] Will wait 60s for crictl version
	I0919 22:34:36.893434   67622 ssh_runner.go:195] Run: which crictl
	I0919 22:34:36.897483   67622 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:34:36.935462   67622 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
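With the runtime endpoint configured in /etc/crictl.yaml above, two further crictl checks that are often useful when debugging a node (not part of this run):
    $ sudo crictl info     # runtime status and configuration as reported over the CRI
    $ sudo crictl ps -a    # all containers known to CRI-O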
	I0919 22:34:36.935558   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:36.971682   67622 ssh_runner.go:195] Run: crio --version
	I0919 22:34:37.011225   67622 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:34:37.012939   67622 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:34:37.014619   67622 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:34:37.016609   67622 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:34:37.035904   67622 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:34:37.040209   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:34:37.053278   67622 mustload.go:65] Loading cluster: ha-984158
	I0919 22:34:37.053547   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:37.053803   67622 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:34:37.073847   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:37.074139   67622 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:34:37.074157   67622 certs.go:194] generating shared ca certs ...
	I0919 22:34:37.074173   67622 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.074282   67622 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:34:37.074329   67622 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:34:37.074340   67622 certs.go:256] generating profile certs ...
	I0919 22:34:37.074417   67622 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:34:37.074441   67622 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:34:37.074452   67622 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:34:37.137117   67622 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 ...
	I0919 22:34:37.137145   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7: {Name:mk19194d581061c0301a7ebaafcb4f75dd6f88da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137332   67622 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 ...
	I0919 22:34:37.137346   67622 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7: {Name:mkdc03dbd8fb2d6fc0a8ac2bb45b7aa14987fe74 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:34:37.137418   67622 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:34:37.137557   67622 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:34:37.137679   67622 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
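The regenerated apiserver certificate has to cover every control-plane IP plus the VIP; its SAN list can be inspected with openssl (profile path as written by the log):
    $ openssl x509 -noout -text \
        -in /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt \
        | grep -A1 'Subject Alternative Name'
    # should list 192.168.49.2, 192.168.49.3, 192.168.49.4 and 192.168.49.254 among the IP SANs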
	I0919 22:34:37.137694   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:34:37.137706   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:34:37.137719   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:34:37.137732   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:34:37.137744   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:34:37.137756   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:34:37.137768   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:34:37.137780   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:34:37.137836   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:34:37.137865   67622 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:34:37.137875   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:34:37.137895   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:34:37.137918   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:34:37.137950   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:34:37.137989   67622 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:34:37.138014   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.138027   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.138042   67622 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.138089   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:37.156562   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:37.245522   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:34:37.249874   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:34:37.263553   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:34:37.267840   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:34:37.282009   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:34:37.286008   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:34:37.299365   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:34:37.303011   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:34:37.316000   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:34:37.319968   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:34:37.335075   67622 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:34:37.339209   67622 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:34:37.352485   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:34:37.379736   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:34:37.405614   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:34:37.430819   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:34:37.457286   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0919 22:34:37.485582   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:34:37.511990   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:34:37.539620   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:34:37.566336   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:34:37.597966   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:34:37.624934   67622 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:34:37.652281   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:34:37.672835   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:34:37.693826   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:34:37.712995   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:34:37.735150   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:34:37.755380   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:34:37.775695   67622 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:34:37.796705   67622 ssh_runner.go:195] Run: openssl version
	I0919 22:34:37.802715   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:34:37.814531   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819194   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.819264   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:34:37.826904   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:34:37.838758   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:34:37.849465   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853251   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.853305   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:34:37.860596   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:34:37.872602   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:34:37.885280   67622 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889622   67622 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.889680   67622 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:34:37.896943   67622 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:34:37.908337   67622 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:34:37.912368   67622 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 22:34:37.912422   67622 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:34:37.912521   67622 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
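The [Unit]/[Service]/[Install] snippet above is the kubelet systemd configuration minikube renders for the new node; the "scp memory" lines below write it out as the kubelet.service unit and its 10-kubeadm.conf drop-in, after which systemd is reloaded and kubelet started. A minimal sketch of that last step (ordinary systemd commands, not minikube source):

    # Sketch: inspect the rendered drop-in and let systemd pick it up.
    cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload      # re-read unit files after the drop-in is written
    sudo systemctl start kubelet      # same steps the log shows at 22:34:38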
	I0919 22:34:37.912549   67622 kube-vip.go:115] generating kube-vip config ...
	I0919 22:34:37.912589   67622 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:34:37.927225   67622 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:34:37.927295   67622 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
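The YAML above is the kube-vip static pod manifest for this node. Because the ip_vs check just before it failed, kube-vip runs in ARP mode (vip_arp="true") and announces the VIP 192.168.49.254 on eth0 instead of load-balancing through IPVS; the manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below. A minimal sketch of how one could confirm the VIP on a control-plane node (illustrative commands, not part of the test harness):

    # Sketch: confirm the kube-vip static pod and the announced VIP.
    sudo ls /etc/kubernetes/manifests/              # kube-vip.yaml lands here
    kubectl -n kube-system get pods | grep kube-vip # one static pod per control-plane node
    ip addr show eth0 | grep 192.168.49.254         # the HA VIP appears on the current leader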
	I0919 22:34:37.927349   67622 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:34:37.937175   67622 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:34:37.937241   67622 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:34:37.946525   67622 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:34:37.966151   67622 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:34:37.991832   67622 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:34:38.014409   67622 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:34:38.018813   67622 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
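The one-liner above points control-plane.minikube.internal at the HA VIP on this node: it filters any existing entry out of /etc/hosts, appends the VIP mapping, and copies the temp file back. A more readable equivalent, as a sketch only (the exact command minikube runs is the single line above):

    # Sketch: expanded form of the /etc/hosts update.
    grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new   # drop any stale entry
    printf '192.168.49.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts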
	I0919 22:34:38.034487   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:38.100010   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:38.123308   67622 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:34:38.123594   67622 start.go:317] joinCluster: &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:34:38.123717   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0919 22:34:38.123769   67622 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:34:38.144625   67622 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:34:38.293340   67622 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:38.293387   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443"
	I0919 22:34:51.872651   67622 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xvegph.tfd7m7k591l3snif --discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-984158-m03 --control-plane --apiserver-advertise-address=192.168.49.4 --apiserver-bind-port=8443": (13.579238089s)
	I0919 22:34:51.872690   67622 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0919 22:34:52.127072   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-984158-m03 minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=ha-984158 minikube.k8s.io/primary=false
	I0919 22:34:52.206869   67622 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-984158-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0919 22:34:52.293044   67622 start.go:319] duration metric: took 14.169442875s to joinCluster
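The sequence above is the usual way an additional control-plane node is added: a join command is minted on the primary with kubeadm token create --print-join-command --ttl=0, the new node runs kubeadm join against the cluster endpoint with --control-plane and its own advertise address, and the node is then labeled and has its control-plane NoSchedule taint removed so it can also schedule workloads. Reduced to a sketch with placeholders (the real token and hash appear verbatim in the log above; minikube copies the control-plane certificates over itself, so its join command does not pass --certificate-key as a stock kubeadm HA setup typically would):

    # Sketch: adding another control-plane node with kubeadm.
    # On an existing control-plane node, mint a join command:
    kubeadm token create --print-join-command --ttl=0
    # On the joining node, run the printed command plus control-plane flags:
    sudo kubeadm join control-plane.minikube.internal:8443 \
      --token <token> --discovery-token-ca-cert-hash sha256:<hash> \
      --control-plane --apiserver-advertise-address 192.168.49.4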
	I0919 22:34:52.293202   67622 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:34:52.293464   67622 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:34:52.295014   67622 out.go:179] * Verifying Kubernetes components...
	I0919 22:34:52.296471   67622 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:34:52.405642   67622 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:34:52.419776   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:34:52.419840   67622 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:34:52.420054   67622 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	W0919 22:34:54.424074   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:56.924240   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:34:58.925198   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:01.425329   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	W0919 22:35:03.923474   67622 node_ready.go:57] node "ha-984158-m03" has "Ready":"False" status (will retry)
	I0919 22:35:05.424225   67622 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:35:05.424253   67622 node_ready.go:38] duration metric: took 13.004161929s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:35:05.424266   67622 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:35:05.424326   67622 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:35:05.438342   67622 api_server.go:72] duration metric: took 13.14509411s to wait for apiserver process to appear ...
	I0919 22:35:05.438367   67622 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:35:05.438390   67622 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:35:05.442575   67622 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:35:05.443547   67622 api_server.go:141] control plane version: v1.34.0
	I0919 22:35:05.443573   67622 api_server.go:131] duration metric: took 5.19876ms to wait for apiserver health ...
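The healthz probe above is a plain HTTPS GET against the first control-plane endpoint. Using the certificate paths from the client config logged a few lines earlier, the same check can be reproduced by hand (a sketch; the paths are the ones this log reports, nothing new):

    # Sketch: the same healthz check with curl and the client certs from the config above.
    curl --cacert /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt \
         --cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt \
         --key /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key \
         https://192.168.49.2:8443/healthz        # expected body: ok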
	I0919 22:35:05.443582   67622 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:35:05.452030   67622 system_pods.go:59] 24 kube-system pods found
	I0919 22:35:05.452062   67622 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.452067   67622 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.452073   67622 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.452079   67622 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.452084   67622 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.452089   67622 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.452094   67622 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.452129   67622 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.452136   67622 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.452141   67622 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.452146   67622 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.452151   67622 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.452156   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.452161   67622 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.452165   67622 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.452170   67622 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.452174   67622 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.452179   67622 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.452184   67622 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.452188   67622 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.452193   67622 system_pods.go:61] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.452199   67622 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.452205   67622 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.452208   67622 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.452217   67622 system_pods.go:74] duration metric: took 8.62798ms to wait for pod list to return data ...
	I0919 22:35:05.452227   67622 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:35:05.455571   67622 default_sa.go:45] found service account: "default"
	I0919 22:35:05.455594   67622 default_sa.go:55] duration metric: took 3.361804ms for default service account to be created ...
	I0919 22:35:05.455603   67622 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:35:05.460748   67622 system_pods.go:86] 24 kube-system pods found
	I0919 22:35:05.460777   67622 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:35:05.460783   67622 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running
	I0919 22:35:05.460787   67622 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:35:05.460790   67622 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:35:05.460793   67622 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:35:05.460798   67622 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:35:05.460801   67622 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:35:05.460803   67622 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:35:05.460806   67622 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:35:05.460809   67622 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:35:05.460812   67622 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:35:05.460815   67622 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:35:05.460818   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:35:05.460821   67622 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:35:05.460826   67622 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:35:05.460829   67622 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:35:05.460832   67622 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:35:05.460835   67622 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:35:05.460838   67622 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:35:05.460841   67622 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:35:05.460844   67622 system_pods.go:89] "kube-vip-ha-984158" [712c6531-a133-444b-9c1d-ea84f8f6c1fa] Running
	I0919 22:35:05.460847   67622 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:35:05.460850   67622 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:35:05.460853   67622 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:35:05.460859   67622 system_pods.go:126] duration metric: took 5.251911ms to wait for k8s-apps to be running ...
	I0919 22:35:05.460866   67622 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:35:05.460906   67622 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:35:05.475728   67622 system_svc.go:56] duration metric: took 14.850569ms WaitForService to wait for kubelet
	I0919 22:35:05.475767   67622 kubeadm.go:578] duration metric: took 13.182524274s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:35:05.475791   67622 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:35:05.479992   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480016   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480028   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480032   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480035   67622 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:35:05.480038   67622 node_conditions.go:123] node cpu capacity is 8
	I0919 22:35:05.480042   67622 node_conditions.go:105] duration metric: took 4.246099ms to run NodePressure ...
	I0919 22:35:05.480052   67622 start.go:241] waiting for startup goroutines ...
	I0919 22:35:05.480076   67622 start.go:255] writing updated cluster config ...
	I0919 22:35:05.480391   67622 ssh_runner.go:195] Run: rm -f paused
	I0919 22:35:05.484443   67622 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:05.484864   67622 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:35:05.488632   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494158   67622 pod_ready.go:94] pod "coredns-66bc5c9577-5gnbx" is "Ready"
	I0919 22:35:05.494184   67622 pod_ready.go:86] duration metric: took 5.519921ms for pod "coredns-66bc5c9577-5gnbx" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.494194   67622 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.498979   67622 pod_ready.go:94] pod "coredns-66bc5c9577-ltjmz" is "Ready"
	I0919 22:35:05.499001   67622 pod_ready.go:86] duration metric: took 4.801852ms for pod "coredns-66bc5c9577-ltjmz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.501488   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506605   67622 pod_ready.go:94] pod "etcd-ha-984158" is "Ready"
	I0919 22:35:05.506631   67622 pod_ready.go:86] duration metric: took 5.121241ms for pod "etcd-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.506643   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511687   67622 pod_ready.go:94] pod "etcd-ha-984158-m02" is "Ready"
	I0919 22:35:05.511711   67622 pod_ready.go:86] duration metric: took 5.063338ms for pod "etcd-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.511721   67622 pod_ready.go:83] waiting for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:05.686203   67622 request.go:683] "Waited before sending request" delay="174.390617ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/etcd-ha-984158-m03"
	I0919 22:35:05.886318   67622 request.go:683] "Waited before sending request" delay="196.323175ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:05.889520   67622 pod_ready.go:94] pod "etcd-ha-984158-m03" is "Ready"
	I0919 22:35:05.889544   67622 pod_ready.go:86] duration metric: took 377.817661ms for pod "etcd-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.086145   67622 request.go:683] "Waited before sending request" delay="196.407438ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-apiserver"
	I0919 22:35:06.090017   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.285426   67622 request.go:683] "Waited before sending request" delay="195.307128ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158"
	I0919 22:35:06.486234   67622 request.go:683] "Waited before sending request" delay="197.363102ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:06.489211   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158" is "Ready"
	I0919 22:35:06.489239   67622 pod_ready.go:86] duration metric: took 399.19471ms for pod "kube-apiserver-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.489249   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.685697   67622 request.go:683] "Waited before sending request" delay="196.373047ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m02"
	I0919 22:35:06.885918   67622 request.go:683] "Waited before sending request" delay="197.214097ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:06.888940   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m02" is "Ready"
	I0919 22:35:06.888966   67622 pod_ready.go:86] duration metric: took 399.709223ms for pod "kube-apiserver-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:06.888977   67622 pod_ready.go:83] waiting for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.086320   67622 request.go:683] "Waited before sending request" delay="197.234187ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-984158-m03"
	I0919 22:35:07.286155   67622 request.go:683] "Waited before sending request" delay="196.391562ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:07.289116   67622 pod_ready.go:94] pod "kube-apiserver-ha-984158-m03" is "Ready"
	I0919 22:35:07.289145   67622 pod_ready.go:86] duration metric: took 400.160627ms for pod "kube-apiserver-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.485647   67622 request.go:683] "Waited before sending request" delay="196.369215ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-controller-manager"
	I0919 22:35:07.489356   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.685801   67622 request.go:683] "Waited before sending request" delay="196.331241ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158"
	I0919 22:35:07.886175   67622 request.go:683] "Waited before sending request" delay="197.36953ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:07.889268   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158" is "Ready"
	I0919 22:35:07.889292   67622 pod_ready.go:86] duration metric: took 399.911799ms for pod "kube-controller-manager-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:07.889300   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.085780   67622 request.go:683] "Waited before sending request" delay="196.397628ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m02"
	I0919 22:35:08.286293   67622 request.go:683] "Waited before sending request" delay="197.157746ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:08.289542   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m02" is "Ready"
	I0919 22:35:08.289565   67622 pod_ready.go:86] duration metric: took 400.260559ms for pod "kube-controller-manager-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.289585   67622 pod_ready.go:83] waiting for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.486054   67622 request.go:683] "Waited before sending request" delay="196.383406ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-984158-m03"
	I0919 22:35:08.685765   67622 request.go:683] "Waited before sending request" delay="196.365381ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:08.688911   67622 pod_ready.go:94] pod "kube-controller-manager-ha-984158-m03" is "Ready"
	I0919 22:35:08.688939   67622 pod_ready.go:86] duration metric: took 399.348524ms for pod "kube-controller-manager-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:08.885240   67622 request.go:683] "Waited before sending request" delay="196.197284ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=k8s-app%3Dkube-proxy"
	I0919 22:35:08.888653   67622 pod_ready.go:83] waiting for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.086194   67622 request.go:683] "Waited before sending request" delay="197.430633ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hdxxn"
	I0919 22:35:09.285936   67622 request.go:683] "Waited before sending request" delay="196.399441ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:09.289309   67622 pod_ready.go:94] pod "kube-proxy-hdxxn" is "Ready"
	I0919 22:35:09.289344   67622 pod_ready.go:86] duration metric: took 400.666867ms for pod "kube-proxy-hdxxn" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.289356   67622 pod_ready.go:83] waiting for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.485857   67622 request.go:683] "Waited before sending request" delay="196.368869ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-k2drm"
	I0919 22:35:09.685224   67622 request.go:683] "Waited before sending request" delay="196.312304ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:09.688202   67622 pod_ready.go:94] pod "kube-proxy-k2drm" is "Ready"
	I0919 22:35:09.688225   67622 pod_ready.go:86] duration metric: took 398.86315ms for pod "kube-proxy-k2drm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.688232   67622 pod_ready.go:83] waiting for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:09.885674   67622 request.go:683] "Waited before sending request" delay="197.37394ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-proxy-plrn2"
	I0919 22:35:10.085404   67622 request.go:683] "Waited before sending request" delay="196.238234ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:10.088413   67622 pod_ready.go:94] pod "kube-proxy-plrn2" is "Ready"
	I0919 22:35:10.088435   67622 pod_ready.go:86] duration metric: took 400.198021ms for pod "kube-proxy-plrn2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.285955   67622 request.go:683] "Waited before sending request" delay="197.399738ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods?labelSelector=component%3Dkube-scheduler"
	I0919 22:35:10.289773   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.486274   67622 request.go:683] "Waited before sending request" delay="196.397415ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158"
	I0919 22:35:10.685865   67622 request.go:683] "Waited before sending request" delay="196.354476ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158"
	I0919 22:35:10.688789   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158" is "Ready"
	I0919 22:35:10.688812   67622 pod_ready.go:86] duration metric: took 399.015441ms for pod "kube-scheduler-ha-984158" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.688821   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:10.886266   67622 request.go:683] "Waited before sending request" delay="197.365068ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m02"
	I0919 22:35:11.085685   67622 request.go:683] "Waited before sending request" delay="196.401015ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m02"
	I0919 22:35:11.088847   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m02" is "Ready"
	I0919 22:35:11.088884   67622 pod_ready.go:86] duration metric: took 400.056175ms for pod "kube-scheduler-ha-984158-m02" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.088895   67622 pod_ready.go:83] waiting for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.285309   67622 request.go:683] "Waited before sending request" delay="196.306548ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-984158-m03"
	I0919 22:35:11.485951   67622 request.go:683] "Waited before sending request" delay="197.396443ms" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://192.168.49.254:8443/api/v1/nodes/ha-984158-m03"
	I0919 22:35:11.489000   67622 pod_ready.go:94] pod "kube-scheduler-ha-984158-m03" is "Ready"
	I0919 22:35:11.489026   67622 pod_ready.go:86] duration metric: took 400.124566ms for pod "kube-scheduler-ha-984158-m03" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 22:35:11.489036   67622 pod_ready.go:40] duration metric: took 6.004562578s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 22:35:11.533521   67622 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 22:35:11.535265   67622 out.go:179] * Done! kubectl is now configured to use "ha-984158" cluster and "default" namespace by default
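At this point the ha-984158 profile has three control-plane nodes behind the 192.168.49.254 VIP and kubectl is pointed at the cluster. A quick way to confirm the state this log describes (illustrative only, not part of the test run):

    # Sketch: confirm the three-node HA cluster after "Done!".
    kubectl get nodes -o wide                  # ha-984158, ha-984158-m02 and ha-984158-m03 all Ready
    kubectl -n kube-system get pods -o wide    # etcd/apiserver/controller-manager/scheduler on each node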
	
	
	==> CRI-O <==
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.550284463Z" level=info msg="Starting container: ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a" id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:33:59 ha-984158 crio[940]: time="2025-09-19 22:33:59.559054866Z" level=info msg="Started container" PID=2323 containerID=ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a description=kube-system/coredns-66bc5c9577-5gnbx/coredns id=e0a3358c-8796-408f-934f-d6cba020a690 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a67546437e6cd1431d56127b35c686ec4fbef541821d81e817187eac2eac44ae
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844458340Z" level=info msg="Running pod sandbox: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.844519430Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863307191Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.863361143Z" level=info msg="Adding pod default_busybox-7b57f96db7-rnjl7 to CNI network \"kindnet\" (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877409166Z" level=info msg="Got pod network &{Name:busybox-7b57f96db7-rnjl7 Namespace:default ID:310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 UID:68cd1643-e7c7-480f-af91-8f2f4eafb766 NetNS:/var/run/netns/06be5280-8181-487d-a6d1-f625eae461d3 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.877580199Z" level=info msg="Checking pod default_busybox-7b57f96db7-rnjl7 for CNI network kindnet (type=ptp)"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.878483692Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.879359170Z" level=info msg="Ran pod sandbox 310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7 with infra container: default/busybox-7b57f96db7-rnjl7/POD" id=d0657219-f572-4248-9235-8842218cfa0c name=/runtime.v1.RuntimeService/RunPodSandbox
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880607012Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.880856313Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28 not found" id=1735f4c5-1314-4a40-8ba8-c3ad07521ed5 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.881636849Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:12 ha-984158 crio[940]: time="2025-09-19 22:35:12.882840066Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:13 ha-984158 crio[940]: time="2025-09-19 22:35:13.826935593Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28\""
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.299818076Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335" id=7ea2e14f-0929-48b6-8660-f50891d76427 name=/runtime.v1.ImageService/PullImage
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.300497300Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301041675Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=93a0214d-e907-4422-9d10-19ea7fc4e56f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.301798545Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.302421301Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,RepoTags:[gcr.io/k8s-minikube/busybox:1.28],RepoDigests:[gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335 gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12],Size_:1363676,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=0a8490eb-33d4-479b-9676-b4224390f69a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305168065Z" level=info msg="Creating container: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.305267569Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.380968697Z" level=info msg="Created container 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e: default/busybox-7b57f96db7-rnjl7/busybox" id=3cab5b69-2469-4018-a242-e29452d9df66 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.381641384Z" level=info msg="Starting container: 9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e" id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:35:15 ha-984158 crio[940]: time="2025-09-19 22:35:15.388597470Z" level=info msg="Started container" PID=2560 containerID=9169b9b095a98acd3968953f0258cb2ff749629d57b2643e038f00f70d59151e description=default/busybox-7b57f96db7-rnjl7/busybox id=796c6084-24c1-4536-af4f-844053cc1347 name=/runtime.v1.RuntimeService/StartContainer sandboxID=310dd81aa67393b3b12bd5c34ad215aa691624c1aeb8d42dc4ec234c6053e4f7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9169b9b095a98       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   2 minutes ago       Running             busybox                   0                   310dd81aa6739       busybox-7b57f96db7-rnjl7
	ea03ecb87a050       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      3 minutes ago       Running             coredns                   0                   a67546437e6cd       coredns-66bc5c9577-5gnbx
	d9aec8cde801c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       0                   f2f4dad3060cd       storage-provisioner
	7df7251c31862       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      3 minutes ago       Running             coredns                   0                   549805b340720       coredns-66bc5c9577-ltjmz
	66e8ff6b4b2da       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      3 minutes ago       Running             kindnet-cni               0                   ca0bb4eb3a856       kindnet-rd882
	c90c0cf2d2e8d       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                      3 minutes ago       Running             kube-proxy                0                   6de94aa7ba9e1       kube-proxy-hdxxn
	6b6a81f4f6b23       ghcr.io/kube-vip/kube-vip@sha256:4f256554a83a6d824ea9c5307450a2c3fd132e09c52b339326f94fefaf67155c     3 minutes ago       Running             kube-vip                  0                   fba7b712cd4d4       kube-vip-ha-984158
	ccf53f9534beb       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                      3 minutes ago       Running             kube-controller-manager   0                   15b128d3c6aed       kube-controller-manager-ha-984158
	01cd32d6daeeb       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                      3 minutes ago       Running             kube-scheduler            0                   d854ebb188beb       kube-scheduler-ha-984158
	fda65fdd5e2b8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      3 minutes ago       Running             etcd                      0                   9e61b75f9a67d       etcd-ha-984158
	8ed4a5888320b       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                      3 minutes ago       Running             kube-apiserver            0                   f7a2c4489feba       kube-apiserver-ha-984158
	
	
	==> coredns [7df7251c318624785e44160ab98a256321ca02663ac3f38b31058625169e65cf] <==
	[INFO] 10.244.1.2:34043 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006963816s
	[INFO] 10.244.1.2:38425 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000137951s
	[INFO] 10.244.2.2:51391 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.0001353s
	[INFO] 10.244.2.2:50788 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010898214s
	[INFO] 10.244.2.2:57984 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165964s
	[INFO] 10.244.2.2:46802 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00010628s
	[INFO] 10.244.2.2:56859 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000133945s
	[INFO] 10.244.0.4:44778 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000139187s
	[INFO] 10.244.0.4:52371 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000149879s
	[INFO] 10.244.0.4:44391 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00012178s
	[INFO] 10.244.0.4:42322 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000090724s
	[INFO] 10.244.1.2:47486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000152861s
	[INFO] 10.244.1.2:33837 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000197948s
	[INFO] 10.244.2.2:57569 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000187028s
	[INFO] 10.244.2.2:49299 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000201838s
	[INFO] 10.244.2.2:56021 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000115909s
	[INFO] 10.244.0.4:58940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136946s
	[INFO] 10.244.0.4:36648 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000142402s
	[INFO] 10.244.1.2:54958 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000137478s
	[INFO] 10.244.1.2:49367 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000111679s
	[INFO] 10.244.2.2:37477 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000176669s
	[INFO] 10.244.2.2:37006 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000082361s
	[INFO] 10.244.0.4:52297 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000131909s
	[INFO] 10.244.0.4:59935 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000069811s
	[INFO] 10.244.0.4:50031 - 5 "PTR IN 1.49.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000124505s
	
	
	==> coredns [ea03ecb87a050996a4161f541de14fb0c989a8a56d2e60b1e5fba8b6e17b480a] <==
	[INFO] 10.244.2.2:33714 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159773s
	[INFO] 10.244.2.2:40292 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00009881s
	[INFO] 10.244.2.2:39630 - 6 "PTR IN 90.167.197.15.in-addr.arpa. udp 44 false 512" NOERROR qr,rd,ra 126 0.000811472s
	[INFO] 10.244.0.4:43002 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000112134s
	[INFO] 10.244.0.4:40782 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 89 0.000094347s
	[INFO] 10.244.1.2:36510 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.033427373s
	[INFO] 10.244.1.2:41816 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000158466s
	[INFO] 10.244.1.2:43260 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000193529s
	[INFO] 10.244.2.2:48795 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000161887s
	[INFO] 10.244.2.2:46683 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000133363s
	[INFO] 10.244.2.2:56162 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000135264s
	[INFO] 10.244.0.4:60293 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000085933s
	[INFO] 10.244.0.4:50296 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.010728706s
	[INFO] 10.244.0.4:42098 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170789s
	[INFO] 10.244.0.4:50435 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000154329s
	[INFO] 10.244.1.2:49298 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000184582s
	[INFO] 10.244.1.2:58606 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000110603s
	[INFO] 10.244.2.2:33122 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000186581s
	[INFO] 10.244.0.4:51847 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000155018s
	[INFO] 10.244.0.4:49360 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091433s
	[INFO] 10.244.1.2:44523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000150525s
	[INFO] 10.244.1.2:48087 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000154066s
	[INFO] 10.244.2.2:47219 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000124336s
	[INFO] 10.244.2.2:58889 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000148273s
	[INFO] 10.244.0.4:47101 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000088754s
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:46 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 39160f7d8b9f44c18aede41e4d267fbd
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m41s
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m41s
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m44s
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m42s
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m44s
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m42s
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m47s
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m49s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m40s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    3m52s (x8 over 3m52s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m52s (x8 over 3m52s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m52s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m52s (x8 over 3m52s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  Starting                 3m45s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m44s                  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m44s                  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m44s                  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m43s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                3m29s                  kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           3m15s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           2m34s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           59s                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:36:26 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:36:26 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:36:26 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:36:26 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 1e2e54308a1d487a97e6122d65ee2eab
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         3m12s
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m14s
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m14s
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         3m12s
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 3m10s              kube-proxy       
	  Normal  RegisteredNode           3m13s              node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           3m10s              node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           2m34s              node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 64s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  64s (x8 over 64s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    64s (x8 over 64s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     64s (x8 over 64s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           59s                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	Name:               ha-984158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:37:25 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:35:42 +0000   Fri, 19 Sep 2025 22:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-984158-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 038f6eff3d614d78917c49afbf40a4e7
	  System UUID:                a60f86ef-6d86-4217-85ca-ad02416ddc34
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c7qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m16s
	  kube-system                 etcd-ha-984158-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m35s
	  kube-system                 kindnet-269nt                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m37s
	  kube-system                 kube-apiserver-ha-984158-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-controller-manager-ha-984158-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-proxy-k2drm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m37s
	  kube-system                 kube-scheduler-ha-984158-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m35s
	  kube-system                 kube-vip-ha-984158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        2m33s  kube-proxy       
	  Normal  RegisteredNode  2m35s  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  2m34s  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  2m32s  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode  59s    node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [fda65fdd5e2b890fe6940cd0f6b5afae54775a44a1e30b23dc514a1ea4a5dd4c] <==
	{"level":"warn","ts":"2025-09-19T22:36:23.387033Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.435892Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.536502Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.543571Z","caller":"etcdserver/cluster_util.go:259","msg":"failed to reach the peer URL","address":"https://192.168.49.3:2380/version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: i/o timeout"}
	{"level":"warn","ts":"2025-09-19T22:36:23.543624Z","caller":"etcdserver/cluster_util.go:160","msg":"failed to get version","remote-member-id":"63b66b54cc365658","error":"Get \"https://192.168.49.3:2380/version\": dial tcp 192.168.49.3:2380: i/o timeout"}
	{"level":"warn","ts":"2025-09-19T22:36:23.545883Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.636132Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.681763Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.736512Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.745977Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.833367Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.836610Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.936255Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:23.978534Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:24.035872Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:24.136484Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:24.183730Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:36:24.236218Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"info","ts":"2025-09-19T22:36:25.015457Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"63b66b54cc365658","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:36:25.015504Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"63b66b54cc365658"}
	{"level":"info","ts":"2025-09-19T22:36:25.015541Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	{"level":"info","ts":"2025-09-19T22:36:25.015472Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"63b66b54cc365658","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:36:25.015589Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	{"level":"info","ts":"2025-09-19T22:36:25.044938Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	{"level":"info","ts":"2025-09-19T22:36:25.057660Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"63b66b54cc365658"}
	
	
	==> kernel <==
	 22:37:28 up  1:19,  0 users,  load average: 0.84, 0.71, 0.51
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [66e8ff6b4b2da8ea01c46a247aa4714a90f2ed1d2ba051443dc7790f7f9aa6d2] <==
	I0919 22:36:38.710617       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:48.710292       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:48.710332       1 main.go:301] handling current node
	I0919 22:36:48.710366       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:48.710376       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:36:48.710574       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:48.710589       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:58.718057       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:36:58.718133       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:36:58.718363       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:36:58.718377       1 main.go:301] handling current node
	I0919 22:36:58.718389       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:36:58.718393       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:08.718031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:08.718070       1 main.go:301] handling current node
	I0919 22:37:08.718091       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:08.718154       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:08.718379       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:08.718392       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:37:18.713239       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:37:18.713277       1 main.go:301] handling current node
	I0919 22:37:18.713297       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:37:18.713303       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:37:18.713509       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:37:18.713520       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8ed4a5888320b17174d5fd3227517f4c634bc157381bb9771474bfa5169aab2f] <==
	I0919 22:33:46.743338       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:33:46.796068       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:33:46.799874       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:34:55.461764       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:35:00.508368       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	E0919 22:35:16.679730       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50288: use of closed network connection
	E0919 22:35:16.855038       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50310: use of closed network connection
	E0919 22:35:17.030728       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50338: use of closed network connection
	E0919 22:35:17.243171       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50346: use of closed network connection
	E0919 22:35:17.421526       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50372: use of closed network connection
	E0919 22:35:17.591329       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50402: use of closed network connection
	E0919 22:35:17.761924       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50422: use of closed network connection
	E0919 22:35:17.931932       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50438: use of closed network connection
	E0919 22:35:18.091452       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50456: use of closed network connection
	E0919 22:35:18.368592       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50480: use of closed network connection
	E0919 22:35:18.524781       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50484: use of closed network connection
	E0919 22:35:18.691736       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50510: use of closed network connection
	E0919 22:35:18.869219       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50534: use of closed network connection
	E0919 22:35:19.030842       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50552: use of closed network connection
	E0919 22:35:19.201169       1 conn.go:339] Error on socket receive: read tcp 192.168.49.254:8443->192.168.49.1:50566: use of closed network connection
	I0919 22:36:01.868494       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:36:02.874315       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 22:36:20.677007       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2 192.168.49.4]
	I0919 22:37:06.733069       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:37:18.252163       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [ccf53f9534beb8a8c8742cb5e71e0540bfd9bc439877b525756c21d5eef9b422] <==
	I0919 22:33:45.991296       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 22:33:45.991359       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:33:45.991661       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I0919 22:33:45.992619       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 22:33:45.992661       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:33:45.992715       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:33:45.992824       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:33:45.992860       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I0919 22:33:45.992945       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:33:45.992988       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0919 22:33:45.994081       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I0919 22:33:45.994164       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 22:33:45.997463       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:33:46.000645       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I0919 22:33:46.007588       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:33:46.014824       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 22:33:46.019019       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:34:00.995932       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	E0919 22:34:13.994601       1 certificate_controller.go:151] "Unhandled Error" err="Sync csr-f5gnl failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io \"csr-f5gnl\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0919 22:34:14.552916       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m02\" does not exist"
	I0919 22:34:14.582362       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m02" podCIDRs=["10.244.1.0/24"]
	I0919 22:34:15.998546       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:34:51.526332       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-984158-m03\" does not exist"
	I0919 22:34:51.541723       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="ha-984158-m03" podCIDRs=["10.244.2.0/24"]
	I0919 22:34:56.108424       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [c90c0cf2d2e8d28017db69b5b6570bb146918d86f62813e08b6cf30633aabf39] <==
	I0919 22:33:48.275684       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:33:48.343595       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:33:48.444904       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:33:48.444958       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:33:48.445144       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:33:48.471588       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:33:48.471666       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:33:48.477726       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:33:48.478178       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:33:48.478219       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:33:48.480033       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:33:48.480053       1 config.go:200] "Starting service config controller"
	I0919 22:33:48.480068       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:33:48.480085       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:33:48.482031       1 config.go:309] "Starting node config controller"
	I0919 22:33:48.482049       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:33:48.482057       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:33:48.480508       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:33:48.482857       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:33:48.580234       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:33:48.582666       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 22:33:48.583733       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [01cd32d6daeeb8f86625ec5d90712811aa7cc0b7dee503e21a57e8bd093892cc] <==
	E0919 22:33:39.908093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E0919 22:33:39.911081       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E0919 22:33:39.988409       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E0919 22:33:40.028297       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E0919 22:33:40.063508       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E0919 22:33:40.098835       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E0919 22:33:40.219678       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E0919 22:33:40.224737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E0919 22:33:40.235874       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E0919 22:33:40.301093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	I0919 22:33:42.406311       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0919 22:34:14.584511       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-plrn2" node="ha-984158-m02"
	E0919 22:34:14.584664       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-plrn2\": pod kube-proxy-plrn2 is already assigned to node \"ha-984158-m02\"" logger="UnhandledError" pod="kube-system/kube-proxy-plrn2"
	E0919 22:34:51.565644       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.565863       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 040bf3f7-8d97-4799-b3a2-12b57eec38ef(kube-system/kube-proxy-k2drm) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565922       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-k2drm\": pod kube-proxy-k2drm is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-k2drm"
	E0919 22:34:51.565851       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	E0919 22:34:51.565999       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 6db503ca-eaf1-4ffc-8418-f778e65529c9(kube-system/kube-proxy-tqv25) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	E0919 22:34:51.565619       1 framework.go:1400] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	E0919 22:34:51.566066       1 schedule_one.go:379] "scheduler cache ForgetPod failed" err="pod 2040513e-991f-4c82-9b69-1e3fa299841a(kube-system/kindnet-gtv88) wasn't assumed so cannot be forgotten" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	E0919 22:34:51.568208       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-tqv25\": pod kube-proxy-tqv25 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kube-proxy-tqv25"
	I0919 22:34:51.568393       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-tqv25" node="ha-984158-m03"
	I0919 22:34:51.568363       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-k2drm" node="ha-984158-m03"
	E0919 22:34:51.568334       1 schedule_one.go:1079] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-gtv88\": pod kindnet-gtv88 is already assigned to node \"ha-984158-m03\"" logger="UnhandledError" pod="kube-system/kindnet-gtv88"
	I0919 22:34:51.574210       1 schedule_one.go:1092] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-gtv88" node="ha-984158-m03"
	
	
	==> kubelet <==
	Sep 19 22:35:23 ha-984158 kubelet[1691]: E0919 22:35:23.937554    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321323937255941  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938855    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:33 ha-984158 kubelet[1691]: E0919 22:35:33.938899    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321333938596677  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940553    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:43 ha-984158 kubelet[1691]: E0919 22:35:43.940595    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321343940230113  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942304    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:35:53 ha-984158 kubelet[1691]: E0919 22:35:53.942351    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321353941911906  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943680    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:03 ha-984158 kubelet[1691]: E0919 22:36:03.943728    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321363943336068  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:13 ha-984158 kubelet[1691]: E0919 22:36:13.944965    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321373944715242  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:13 ha-984158 kubelet[1691]: E0919 22:36:13.945002    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321373944715242  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:23 ha-984158 kubelet[1691]: E0919 22:36:23.946622    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321383946409607  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:23 ha-984158 kubelet[1691]: E0919 22:36:23.946661    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321383946409607  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:33 ha-984158 kubelet[1691]: E0919 22:36:33.948757    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321393948572057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:33 ha-984158 kubelet[1691]: E0919 22:36:33.948793    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321393948572057  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:43 ha-984158 kubelet[1691]: E0919 22:36:43.950070    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321403949809476  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:43 ha-984158 kubelet[1691]: E0919 22:36:43.950150    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321403949809476  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:53 ha-984158 kubelet[1691]: E0919 22:36:53.951489    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321413951213559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:36:53 ha-984158 kubelet[1691]: E0919 22:36:53.951523    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321413951213559  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:03 ha-984158 kubelet[1691]: E0919 22:37:03.953403    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321423953139834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:03 ha-984158 kubelet[1691]: E0919 22:37:03.953445    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321423953139834  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:13 ha-984158 kubelet[1691]: E0919 22:37:13.955301    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321433955000157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:13 ha-984158 kubelet[1691]: E0919 22:37:13.955340    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321433955000157  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:23 ha-984158 kubelet[1691]: E0919 22:37:23.956555    1691 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321443956309195  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:37:23 ha-984158 kubelet[1691]: E0919 22:37:23.956591    1691 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321443956309195  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (65.93s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (476.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 stop --alsologtostderr -v 5
E0919 22:37:58.616274   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.622716   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.634258   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.655689   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.697189   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.778668   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:58.942225   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:59.263964   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:37:59.906305   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:01.188337   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:03.750876   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:08.872476   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:15.398625   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:38:19.113945   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 stop --alsologtostderr -v 5: (50.019470595s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 start --wait true --alsologtostderr -v 5
E0919 22:38:39.595333   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:39:20.557187   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:40:42.478758   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:41:52.326222   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:42:58.611767   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:43:26.320266   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 start --wait true --alsologtostderr -v 5: exit status 80 (7m4.485003481s)

                                                
                                                
-- stdout --
	* [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Enabled addons: 
	
	* Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2,192.168.49.3
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	  - env NO_PROXY=192.168.49.2,192.168.49.3
	* Verifying Kubernetes components...
	
	* Starting "ha-984158-m04" worker node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:38:20.249865   95759 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:20.249988   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.249994   95759 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:20.250000   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.250249   95759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:38:20.250707   95759 out.go:368] Setting JSON to false
	I0919 22:38:20.251700   95759 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4850,"bootTime":1758316650,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:38:20.251800   95759 start.go:140] virtualization: kvm guest
	I0919 22:38:20.254109   95759 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:38:20.255764   95759 notify.go:220] Checking for updates...
	I0919 22:38:20.255845   95759 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:38:20.257481   95759 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:38:20.259062   95759 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:20.260518   95759 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:38:20.262187   95759 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:38:20.263765   95759 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:38:20.265783   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:20.265907   95759 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:38:20.294398   95759 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:38:20.294613   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.361388   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.349869718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.361497   95759 docker.go:318] overlay module found
	I0919 22:38:20.363722   95759 out.go:179] * Using the docker driver based on existing profile
	I0919 22:38:20.365305   95759 start.go:304] selected driver: docker
	I0919 22:38:20.365327   95759 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.365467   95759 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:38:20.365552   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.420337   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.409819419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.420989   95759 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:38:20.421017   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:20.421096   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:20.421172   95759 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubef
low:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.423543   95759 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:38:20.425622   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:20.427928   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:20.429486   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:20.429552   95759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:38:20.429561   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:20.429624   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:20.429683   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:20.429696   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:20.429903   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.451753   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:20.451777   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:20.451800   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:20.451830   95759 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:20.451903   95759 start.go:364] duration metric: took 52.261µs to acquireMachinesLock for "ha-984158"
	I0919 22:38:20.451929   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:20.451935   95759 fix.go:54] fixHost starting: 
	I0919 22:38:20.452267   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.470646   95759 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:38:20.470675   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:20.473543   95759 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:38:20.473635   95759 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:38:20.725924   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.747322   95759 kic.go:430] container "ha-984158" state is running.
	I0919 22:38:20.748445   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:20.768582   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.768847   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:20.768938   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:20.788669   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:20.788894   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:20.788907   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:20.789621   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46262->127.0.0.1:32813: read: connection reset by peer
	I0919 22:38:23.928529   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:23.928563   95759 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:38:23.928620   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:23.947237   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:23.947447   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:23.947461   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:38:24.095390   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:24.095477   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.113617   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.113853   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.113878   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:24.249977   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:24.250008   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:24.250048   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:24.250058   95759 provision.go:84] configureAuth start
	I0919 22:38:24.250116   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:24.268530   95759 provision.go:143] copyHostCerts
	I0919 22:38:24.268578   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268614   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:24.268624   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268699   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:24.268797   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268816   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:24.268820   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268848   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:24.268908   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268928   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:24.268932   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268959   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:24.269015   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:38:24.530322   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:24.530388   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:24.530429   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.549937   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:24.649314   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:24.649386   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:24.674567   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:24.674639   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:38:24.700190   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:24.700255   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:24.725998   95759 provision.go:87] duration metric: took 475.930644ms to configureAuth
	I0919 22:38:24.726025   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:24.726265   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:24.726378   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.744668   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.744868   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.744887   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:25.041744   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:25.041773   95759 machine.go:96] duration metric: took 4.2729084s to provisionDockerMachine
	I0919 22:38:25.041790   95759 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:38:25.041804   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:25.041885   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:25.041937   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.061613   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.158944   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:25.162445   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:25.162473   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:25.162481   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:25.162487   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:25.162497   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:25.162543   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:25.162612   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:25.162622   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:25.162697   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:25.171420   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:25.196548   95759 start.go:296] duration metric: took 154.74522ms for postStartSetup
	I0919 22:38:25.196622   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:25.196658   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.214818   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.307266   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:25.311757   95759 fix.go:56] duration metric: took 4.859817354s for fixHost
	I0919 22:38:25.311786   95759 start.go:83] releasing machines lock for "ha-984158", held for 4.859867111s
	I0919 22:38:25.311855   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:25.331292   95759 ssh_runner.go:195] Run: cat /version.json
	I0919 22:38:25.331342   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.331445   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:25.331519   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.350964   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.351259   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.521285   95759 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:25.525969   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:25.668131   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:25.673196   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.683302   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:25.683463   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.693199   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:25.693229   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:25.693261   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:25.693301   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:25.705935   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:25.717521   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:25.717575   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:25.730590   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:25.742679   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:25.806884   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:25.876321   95759 docker.go:234] disabling docker service ...
	I0919 22:38:25.876399   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:25.889742   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:25.902299   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:25.968552   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:26.035171   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:26.047090   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:26.063771   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:26.063823   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.074242   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:26.074296   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.085364   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.096159   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.106569   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:26.116384   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.127163   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.138533   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.149140   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:26.157845   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:26.166573   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.230447   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:38:26.333573   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:38:26.333644   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:38:26.337977   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:38:26.338040   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:38:26.341911   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:38:26.375206   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:38:26.375273   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.410086   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.448363   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:38:26.449629   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:38:26.467494   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:38:26.471488   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.484310   95759 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:38:26.484505   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:26.484557   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.531218   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.531242   95759 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:38:26.531296   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.567181   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.567205   95759 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:38:26.567217   95759 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:38:26.567354   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:38:26.567443   95759 ssh_runner.go:195] Run: crio config
	I0919 22:38:26.612533   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:26.612558   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:26.612573   95759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:38:26.612596   95759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:38:26.612731   95759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:38:26.612751   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:38:26.612791   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:38:26.625916   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:26.626026   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:38:26.626083   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:38:26.636322   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:38:26.636382   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:38:26.645958   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:38:26.665184   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:38:26.684627   95759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:38:26.703734   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:38:26.722194   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:38:26.726033   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.737748   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.802332   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:38:26.828015   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:38:26.828140   95759 certs.go:194] generating shared ca certs ...
	I0919 22:38:26.828156   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:26.828370   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:38:26.828426   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:38:26.828439   95759 certs.go:256] generating profile certs ...
	I0919 22:38:26.828533   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:38:26.828559   95759 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24
	I0919 22:38:26.828573   95759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:38:27.179556   95759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 ...
	I0919 22:38:27.179596   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24: {Name:mk0ca61656ed051ffa5dbf8b847da7c47b965f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179810   95759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 ...
	I0919 22:38:27.179828   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24: {Name:mk16b6aae6417eca80799eff0a4c27dc0860bcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179937   95759 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:38:27.180098   95759 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:38:27.180260   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:38:27.180276   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:38:27.180289   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:38:27.180307   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:38:27.180321   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:38:27.180334   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:38:27.180354   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:38:27.180364   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:38:27.180373   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:38:27.180419   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:38:27.180445   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:38:27.180454   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:38:27.180474   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:38:27.180497   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:38:27.180517   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:38:27.180557   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:27.180607   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.180624   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.180637   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.181195   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:38:27.209358   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:38:27.235624   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:38:27.260629   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:38:27.286335   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:38:27.312745   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:38:27.340226   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:38:27.366125   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:38:27.395452   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:38:27.424801   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:38:27.463750   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:38:27.502091   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:38:27.530600   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:38:27.538166   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:38:27.552357   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559014   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559181   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.569405   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:38:27.582829   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:38:27.597217   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602410   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602472   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.610784   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:38:27.624272   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:38:27.635899   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640089   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640162   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.647669   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:38:27.657702   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:38:27.661673   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:38:27.669449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:38:27.676756   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:38:27.683701   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:38:27.690945   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:38:27.698327   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
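	Each openssl -checkend 86400 run above exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit is what would push minikube toward regenerating that cert. The same check can be reproduced by hand on the node, for example:
	
		# exits 0 if the cert is still valid 24h from now, non-zero otherwise
		openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
		  && echo "apiserver.crt valid for at least another 24h" \
		  || echo "apiserver.crt expires within 24h (or is unreadable)"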
	I0919 22:38:27.705328   95759 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:27.705437   95759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:38:27.705491   95759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:38:27.743232   95759 cri.go:89] found id: "55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645"
	I0919 22:38:27.743258   95759 cri.go:89] found id: "79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9"
	I0919 22:38:27.743263   95759 cri.go:89] found id: "32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3"
	I0919 22:38:27.743269   95759 cri.go:89] found id: "935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba"
	I0919 22:38:27.743273   95759 cri.go:89] found id: "13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87"
	I0919 22:38:27.743277   95759 cri.go:89] found id: ""
	I0919 22:38:27.743327   95759 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:38:27.766931   95759 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","pid":859,"status":"running","bundle":"/run/containers/storage/overlay-containers/13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87/userdata","rootfs":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","created":"2025-09-19T22:38:27.457722678Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"n
ame\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.401544575Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223d
c4628f1e45acc16ccdb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/ee72b99d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kub
ernetes.io/config.seen":"2025-09-19T22:38:26.901880352Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","pid":878,"status":"running","bundle":"/run/containers/storage/overlay-containers/32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3/userdata","rootfs":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","created":"2025-09-19T22:38:27.465010624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMe
ssagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.416092699Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb
866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"cont
ainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/a6d77d36\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:38:26.901891443Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd
.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","pid":954,"status":"running","bundle":"/run/containers/storage/overlay-containers/55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645/userdata","rootfs":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","created":"2025-09-19T22:38:27.515032823Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.
hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.443516596Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-98415
8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kuberne
tes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/d0001fc3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"hos
t_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:38:26.901886915Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79c74b643f5a5959b25d582e997875f3399705b
3da970e161badc0d1521410a9","pid":921,"status":"running","bundle":"/run/containers/storage/overlay-containers/79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9/userdata","rootfs":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","created":"2025-09-19T22:38:27.502254065Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.ku
bernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.438041518Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-
scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/6dc9da94\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:38:26.901890185Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDepen
dencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","pid":903,"status":"running","bundle":"/run/containers/storage/overlay-containers/935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba/userdata","rootfs":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","created":"2025-09-19T22:38:27.483620953Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7e
aa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.414415487Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controlle
r-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6b05a580a113699
67b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/e63161fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonl
y\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a11369967b27d393af16",
"kubernetes.io/config.seen":"2025-09-19T22:38:26.901888813Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:38:27.767290   95759 cri.go:126] list returned 5 containers
	I0919 22:38:27.767310   95759 cri.go:129] container: {ID:13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 Status:running}
	I0919 22:38:27.767328   95759 cri.go:135] skipping {13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 running}: state = "running", want "paused"
	I0919 22:38:27.767344   95759 cri.go:129] container: {ID:32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 Status:running}
	I0919 22:38:27.767353   95759 cri.go:135] skipping {32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 running}: state = "running", want "paused"
	I0919 22:38:27.767369   95759 cri.go:129] container: {ID:55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 Status:running}
	I0919 22:38:27.767378   95759 cri.go:135] skipping {55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 running}: state = "running", want "paused"
	I0919 22:38:27.767384   95759 cri.go:129] container: {ID:79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 Status:running}
	I0919 22:38:27.767393   95759 cri.go:135] skipping {79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 running}: state = "running", want "paused"
	I0919 22:38:27.767399   95759 cri.go:129] container: {ID:935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba Status:running}
	I0919 22:38:27.767405   95759 cri.go:135] skipping {935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba running}: state = "running", want "paused"
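	The skips above are expected: at this step minikube is only looking for containers that are already paused (so it can unpause them on restart), and all five kube-system containers are running. An equivalent manual query over runc's JSON listing, as a sketch that assumes jq is available on the node, would be:
	
		# list only paused containers known to runc; empty output matches the five skips above
		sudo runc list -f json | jq -r '.[] | select(.status == "paused") | .id'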
	I0919 22:38:27.767454   95759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:38:27.777467   95759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:38:27.777485   95759 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:38:27.777529   95759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:38:27.786748   95759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:27.787254   95759 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.787385   95759 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:38:27.787739   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.788395   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:38:27.788915   95759 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:38:27.788933   95759 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:38:27.788940   95759 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:38:27.788945   95759 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:38:27.788950   95759 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:38:27.788983   95759 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:38:27.789419   95759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:38:27.799384   95759 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:38:27.799408   95759 kubeadm.go:593] duration metric: took 21.916898ms to restartPrimaryControlPlane
	I0919 22:38:27.799419   95759 kubeadm.go:394] duration metric: took 94.114072ms to StartCluster
	I0919 22:38:27.799438   95759 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.799508   95759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.800283   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.800531   95759 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:38:27.800560   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:38:27.800569   95759 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:38:27.800796   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.803656   95759 out.go:179] * Enabled addons: 
	I0919 22:38:27.804977   95759 addons.go:514] duration metric: took 4.403593ms for enable addons: enabled=[]
	I0919 22:38:27.805014   95759 start.go:246] waiting for cluster config update ...
	I0919 22:38:27.805026   95759 start.go:255] writing updated cluster config ...
	I0919 22:38:27.806661   95759 out.go:203] 
	I0919 22:38:27.808147   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.808240   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.809900   95759 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:38:27.811058   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:27.812367   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:27.813643   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:27.813670   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:27.813747   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:27.813763   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:27.813745   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:27.813880   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.838519   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:27.838542   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:27.838565   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:27.838595   95759 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:27.838659   95759 start.go:364] duration metric: took 44.758µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:38:27.838683   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:27.838692   95759 fix.go:54] fixHost starting: m02
	I0919 22:38:27.838992   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:27.861121   95759 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:38:27.861152   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:27.863184   95759 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:38:27.863257   95759 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:38:28.125822   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:28.146346   95759 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:38:28.146733   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:28.168173   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:28.168475   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:28.168559   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:28.189073   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:28.189415   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:28.189432   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:28.190241   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45924->127.0.0.1:32818: read: connection reset by peer
	I0919 22:38:31.326317   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.326343   95759 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:38:31.326396   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.346064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.346303   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.346317   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:38:31.495830   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.495906   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.515009   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.515247   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.515266   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:31.654008   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:31.654036   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:31.654057   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:31.654067   95759 provision.go:84] configureAuth start
	I0919 22:38:31.654148   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:31.672869   95759 provision.go:143] copyHostCerts
	I0919 22:38:31.672912   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.672970   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:31.672984   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.673073   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:31.673199   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673230   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:31.673241   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673286   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:31.673375   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673403   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:31.673410   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673450   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:31.673525   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:38:31.832848   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:31.832920   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:31.832966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.850721   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:31.949325   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:31.949391   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:31.976597   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:31.976650   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:38:32.002584   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:32.002653   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:32.035331   95759 provision.go:87] duration metric: took 381.249624ms to configureAuth
	I0919 22:38:32.035366   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:32.035610   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:32.035718   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.058439   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:32.058702   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:32.058739   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:32.484521   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:32.484550   95759 machine.go:96] duration metric: took 4.316059426s to provisionDockerMachine
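	The SSH command above drops CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. A minimal sketch for checking the result by hand on the node (assuming the crio unit sources that file through an EnvironmentFile= line, which this log does not itself show):

		# e.g. via: minikube ssh -p ha-984158 -n m02
		cat /etc/sysconfig/crio.minikube
		sudo systemctl show crio -p EnvironmentFiles
		sudo systemctl is-active crio    # should print "active" after the restart
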
	I0919 22:38:32.484563   95759 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:38:32.484576   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:32.484635   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:32.484697   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.510926   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.619996   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:32.629566   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:32.629676   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:32.629727   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:32.629764   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:32.629806   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:32.629922   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:32.630086   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:32.630147   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:32.630353   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:32.645004   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:32.675202   95759 start.go:296] duration metric: took 190.622889ms for postStartSetup
	I0919 22:38:32.675288   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:32.675327   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.697580   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.795763   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:32.801249   95759 fix.go:56] duration metric: took 4.962547133s for fixHost
	I0919 22:38:32.801275   95759 start.go:83] releasing machines lock for "ha-984158-m02", held for 4.962602853s
	I0919 22:38:32.801364   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:32.827878   95759 out.go:179] * Found network options:
	I0919 22:38:32.829587   95759 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:38:32.830969   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:38:32.831030   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:38:32.831146   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:32.831196   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.831204   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:32.831253   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.853448   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.853718   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:33.150612   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:33.160301   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.176730   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:33.176815   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.191328   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
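	The two find commands above park any loopback or bridge/podman CNI configs by renaming them to *.mk_disabled, so only the CNI minikube manages (kindnet in this run) stays active. A quick, purely illustrative way to see what was disabled:

		ls -l /etc/cni/net.d/
		# files ending in .mk_disabled are ignored by CRI-O; here only the loopback
		# config was present, hence "no active bridge cni configs found"
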
	I0919 22:38:33.191364   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:33.191416   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:33.191485   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:33.213815   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:33.231542   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:33.231635   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:33.247095   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:33.260329   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:33.380840   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:33.498308   95759 docker.go:234] disabling docker service ...
	I0919 22:38:33.498382   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:33.517853   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:33.536133   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:33.652463   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:33.761899   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:33.774677   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:33.793915   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:33.793969   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.804996   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:33.805057   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.816056   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.827802   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.840124   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:33.850301   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.861287   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.871826   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.883496   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:33.893950   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:33.906440   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:34.043971   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:04.326209   95759 ssh_runner.go:235] Completed: sudo systemctl restart crio: (30.282202499s)
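	The sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf before the (slow, ~30s here) crio restart: pause image, systemd cgroup manager, conmon cgroup, and the unprivileged-port sysctl. A sketch of how to spot-check the drop-in afterwards, with the values those edits are meant to produce:

		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
		  /etc/crio/crio.conf.d/02-crio.conf
		# expected, assuming the edits applied cleanly:
		#   pause_image = "registry.k8s.io/pause:3.10.1"
		#   cgroup_manager = "systemd"
		#   conmon_cgroup = "pod"
		#   "net.ipv4.ip_unprivileged_port_start=0",
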
	I0919 22:39:04.326243   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:04.326297   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:04.330226   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:04.330288   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:04.334075   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:04.369702   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:04.369800   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.406718   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.445793   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:04.446931   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:04.448076   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:04.466313   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:04.470940   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:04.487515   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:04.487734   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:04.487986   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:04.509829   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:04.510158   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:39:04.510174   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:04.510188   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:04.510345   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:04.510395   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:04.510409   95759 certs.go:256] generating profile certs ...
	I0919 22:39:04.510508   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:04.510584   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.84abfbbb
	I0919 22:39:04.510636   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:04.510651   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:04.510678   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:04.510696   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:04.510717   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:04.510733   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:04.510752   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:04.510781   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:04.510806   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:04.510875   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:04.510915   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:04.510928   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:04.510960   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:04.510988   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:04.511020   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:04.511077   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:04.511136   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:04.511156   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:04.511176   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:04.511229   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:04.532173   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:04.620518   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:04.624965   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:04.638633   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:04.642459   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:04.656462   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:04.660491   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:04.673947   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:04.678496   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:04.694022   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:04.698129   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:04.711457   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:04.715160   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:04.729617   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:04.756565   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:04.783062   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:04.808557   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:04.834684   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:04.860337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:04.887473   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:04.913478   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:04.941337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:04.967151   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:04.994669   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:05.028238   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:05.050978   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:05.073833   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:05.097285   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:05.120404   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:05.142847   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:05.163160   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:05.184053   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:05.190286   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:05.200925   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.204978   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.205054   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.211914   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:05.222874   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:05.234900   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238900   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238947   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.246276   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:05.255894   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:05.266269   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270313   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270382   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.278196   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:05.287746   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:05.291476   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:05.298503   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:05.305486   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:05.312720   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:05.319784   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:05.327527   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
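	The openssl calls above do two things: link each CA into /etc/ssl/certs under its subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0) and verify with -checkend 86400 that every serving certificate remains valid at least 24 hours out. Condensed into a hand-run sketch:

		openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
		ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
		# -checkend N exits 0 only if the cert does NOT expire within N seconds
		sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
		  -checkend 86400 && echo "valid for >24h"
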
	I0919 22:39:05.334693   95759 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:39:05.334792   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:05.334818   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:05.334851   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:05.347510   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.347572   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
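	Since lsmod found no ip_vs modules, kube-vip skips IPVS load-balancing and (with vip_arp=true) simply advertises the VIP 192.168.49.254 on eth0 from whichever control-plane node holds the plndr-cp-lock lease. Once kubelet picks up the manifest copied a few lines below, a rough way to confirm the VIP is live (hedged sketch, names taken from this log):

		kubectl get pods -n kube-system -o wide | grep kube-vip
		# on the node currently holding the lease:
		ip addr show eth0 | grep 192.168.49.254
		curl -sk https://192.168.49.254:8443/healthz ; echo    # expect: ok
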
	I0919 22:39:05.347618   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:05.356984   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:05.357056   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:05.367597   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:05.387861   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:05.406815   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:05.427878   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:05.432487   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
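	The one-liner above is minikube's idiom for pinning control-plane.minikube.internal to the HA VIP: filter out any existing entry, append the new mapping, then copy the temp file back over /etc/hosts with sudo. Spelled out step by step (illustrative only):

		grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts > /tmp/hosts.new
		printf '192.168.49.254\tcontrol-plane.minikube.internal\n' >> /tmp/hosts.new
		sudo cp /tmp/hosts.new /etc/hosts
		getent hosts control-plane.minikube.internal   # should now resolve to 192.168.49.254
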
	I0919 22:39:05.444804   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.548051   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.560978   95759 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:05.561299   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.563075   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:05.564716   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.672434   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.689063   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:05.689191   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:05.689392   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698088   95759 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:39:05.698164   95759 node_ready.go:38] duration metric: took 8.753764ms for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698182   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:05.698299   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.711300   95759 api_server.go:72] duration metric: took 150.274321ms to wait for apiserver process to appear ...
	I0919 22:39:05.711326   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:05.711345   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:05.716499   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:05.717555   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:05.717586   95759 api_server.go:131] duration metric: took 6.25291ms to wait for apiserver health ...
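	The health gate above is just an HTTPS GET against the first control plane's /healthz endpoint, which returned 200 "ok" before the version probe. Reproducing it by hand from the host (hedged; -k skips CA verification, or point --cacert at the minikube CA):

		curl -sk https://192.168.49.2:8443/healthz ; echo
		# expected output: ok
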
	I0919 22:39:05.717595   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:05.724069   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:05.724156   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724172   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724180   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.724186   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.724191   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.724196   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.724201   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.724210   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.724219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.724226   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.724233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.724241   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.724248   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.724256   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.724262   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.724268   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.724277   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.724285   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.724293   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.724298   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.724303   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.724308   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.724317   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.724325   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.724338   95759 system_pods.go:74] duration metric: took 6.735402ms to wait for pod list to return data ...
	I0919 22:39:05.724355   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:05.728216   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:05.728243   95759 default_sa.go:55] duration metric: took 3.879783ms for default service account to be created ...
	I0919 22:39:05.728256   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:05.733903   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:05.733937   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733945   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733951   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.733954   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.733958   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.733961   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.733964   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.733969   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.733973   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.733976   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.733979   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.733982   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.733986   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.733990   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.733993   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.733995   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.733999   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.734007   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.734010   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.734013   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.734016   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.734019   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.734022   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.734025   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.734035   95759 system_pods.go:126] duration metric: took 5.77298ms to wait for k8s-apps to be running ...
	I0919 22:39:05.734044   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:05.734085   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.746589   95759 system_svc.go:56] duration metric: took 12.533548ms WaitForService to wait for kubelet
	I0919 22:39:05.746629   95759 kubeadm.go:578] duration metric: took 185.605298ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:05.746655   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:05.750196   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750221   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750233   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750236   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750240   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750242   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750246   95759 node_conditions.go:105] duration metric: took 3.586256ms to run NodePressure ...
	I0919 22:39:05.750259   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:05.750286   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:05.752610   95759 out.go:203] 
	I0919 22:39:05.754285   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.754392   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.756186   95759 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:39:05.757628   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:05.758862   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:05.760172   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:05.760197   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:05.760252   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:05.760314   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:05.760332   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:05.760441   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.782434   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:05.782456   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:05.782471   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:05.782504   95759 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:05.782575   95759 start.go:364] duration metric: took 49.512µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:39:05.782600   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:05.782610   95759 fix.go:54] fixHost starting: m03
	I0919 22:39:05.782826   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:05.800849   95759 fix.go:112] recreateIfNeeded on ha-984158-m03: state=Stopped err=<nil>
	W0919 22:39:05.800880   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:05.803272   95759 out.go:252] * Restarting existing docker container for "ha-984158-m03" ...
	I0919 22:39:05.803361   95759 cli_runner.go:164] Run: docker start ha-984158-m03
	I0919 22:39:06.059506   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:06.078641   95759 kic.go:430] container "ha-984158-m03" state is running.
	I0919 22:39:06.079004   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:06.099001   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:06.099262   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:06.099315   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:06.117915   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:06.118166   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:06.118181   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:06.118862   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49366->127.0.0.1:32823: read: connection reset by peer
	I0919 22:39:09.258735   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.258764   95759 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:39:09.258824   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.277807   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.278027   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.278041   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:39:09.428956   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.429040   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.447284   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.447535   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.447560   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:39:09.593539   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:09.593573   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:39:09.593598   95759 ubuntu.go:190] setting up certificates
	I0919 22:39:09.593609   95759 provision.go:84] configureAuth start
	I0919 22:39:09.593674   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:09.617495   95759 provision.go:143] copyHostCerts
	I0919 22:39:09.617537   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617594   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:39:09.617607   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617684   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:39:09.617811   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.617846   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:39:09.617853   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.618482   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:39:09.618632   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618662   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:39:09.618671   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618706   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:39:09.618780   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:39:09.838307   95759 provision.go:177] copyRemoteCerts
	I0919 22:39:09.838429   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:39:09.838478   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.863933   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:09.983312   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:39:09.983424   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:39:10.021925   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:39:10.022008   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:39:10.063154   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:39:10.063276   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:39:10.104760   95759 provision.go:87] duration metric: took 511.137266ms to configureAuth
	I0919 22:39:10.104795   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:39:10.105072   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:10.105290   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.130112   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:10.130385   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:10.130414   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:39:10.533816   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:39:10.533844   95759 machine.go:96] duration metric: took 4.434568252s to provisionDockerMachine
	I0919 22:39:10.533858   95759 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:39:10.533871   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:39:10.533932   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:39:10.533966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.553604   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.653755   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:39:10.657424   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:39:10.657456   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:39:10.657463   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:39:10.657469   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:39:10.657479   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:39:10.657531   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:39:10.657598   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:39:10.657608   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:39:10.657691   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:39:10.667261   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:10.700579   95759 start.go:296] duration metric: took 166.704996ms for postStartSetup
	I0919 22:39:10.700662   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:10.700704   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.728418   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.830886   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:10.836158   95759 fix.go:56] duration metric: took 5.053541909s for fixHost
	I0919 22:39:10.836186   95759 start.go:83] releasing machines lock for "ha-984158-m03", held for 5.053597855s
	I0919 22:39:10.836256   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:10.859049   95759 out.go:179] * Found network options:
	I0919 22:39:10.860801   95759 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:39:10.862070   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862112   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862141   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862155   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:39:10.862232   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:39:10.862282   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.862297   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:39:10.862360   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.885568   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.886944   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:11.122339   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:39:11.127789   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.138248   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:39:11.138341   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.147671   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:39:11.147698   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:39:11.147735   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:39:11.147774   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:39:11.160936   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:39:11.174826   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:39:11.174888   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:39:11.190348   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:39:11.203116   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:39:11.321919   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:39:11.432545   95759 docker.go:234] disabling docker service ...
	I0919 22:39:11.432608   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:39:11.446263   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:39:11.458056   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:39:11.572334   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:39:11.685921   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:39:11.698336   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:39:11.718031   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:39:11.718164   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.731929   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:39:11.732016   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.743385   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.755175   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.766807   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:39:11.779733   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.791806   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.802833   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
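The sed invocations above rewrite keys in /etc/crio/crio.conf.d/02-crio.conf in place: the pause image, the cgroup manager, the conmon cgroup, and the default sysctls list. As a rough illustration of the same kind of in-place key rewrite done from Go instead of sed, here is a minimal sketch; it deliberately targets a scratch copy rather than the live CRI-O drop-in.

package main

import (
	"fmt"
	"os"
	"regexp"
)

func main() {
	// Illustrative scratch path; the log edits /etc/crio/crio.conf.d/02-crio.conf directly.
	path := "/tmp/02-crio.conf"
	data, err := os.ReadFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Equivalent of the first two sed substitutions above: pin the pause image
	// and force the systemd cgroup manager.
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "systemd"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}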
	I0919 22:39:11.813877   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:39:11.824761   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:39:11.835392   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:11.940776   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:12.206168   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:12.206252   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:12.210177   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:12.210235   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:12.213924   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:12.250824   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:12.250899   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.288367   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.331200   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:12.332776   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:12.334399   95759 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:39:12.335764   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:12.353568   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:12.357576   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:12.370671   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:12.370930   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:12.371317   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:12.389760   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:12.390003   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:39:12.390016   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:12.390030   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:12.390204   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:12.390274   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:12.390289   95759 certs.go:256] generating profile certs ...
	I0919 22:39:12.390403   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:12.390484   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:39:12.390533   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:12.390549   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:12.390568   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:12.390585   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:12.390601   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:12.390614   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:12.390628   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:12.390641   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:12.390653   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:12.390711   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:12.390749   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:12.390761   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:12.390789   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:12.390812   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:12.390832   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:12.390871   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:12.390895   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:12.390910   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:12.390923   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:12.390971   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:12.408363   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:12.497500   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:12.501626   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:12.514736   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:12.518842   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:12.534226   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:12.538486   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:12.551906   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:12.555555   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:12.568778   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:12.573237   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:12.587524   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:12.591646   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:12.605021   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:12.632905   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:12.658562   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:12.685222   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:12.710986   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:12.742821   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:12.774649   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:12.808068   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:12.840999   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:12.873033   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:12.904176   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:12.935469   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:12.958451   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:12.983716   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:13.006372   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:13.026634   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:13.048003   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:13.067093   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:13.091242   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:13.097309   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:13.107657   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111389   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111438   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.118417   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:13.129698   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:13.140452   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144194   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144245   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.151266   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:13.161188   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:13.171891   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176332   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176413   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.184138   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:13.193625   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:13.197577   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:13.204628   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:13.211553   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:13.218449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:13.225712   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:13.232770   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
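Each `openssl x509 -noout -in <cert> -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours) and exits non-zero if it does. A minimal Go sketch of the same check, assuming one of the cert paths from the log, looks roughly like this:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Illustrative path; the log checks several certs under /var/lib/minikube/certs.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// Same idea as `openssl x509 -checkend 86400`: fail if the cert
	// expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}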
	I0919 22:39:13.239778   95759 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:39:13.239885   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:13.239907   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:13.239943   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:13.252386   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:13.252462   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
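The manifest above is the kube-vip static pod that is later copied to /etc/kubernetes/manifests/kube-vip.yaml; the virtual IP itself travels in the `address` and `port` environment variables. As a small sketch (using gopkg.in/yaml.v3, an assumed external dependency), the relevant values could be read back out of that file like this:

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// pod declares only the fields needed to pull the VIP settings back out of
// the generated manifest.
type pod struct {
	Spec struct {
		Containers []struct {
			Name string `yaml:"name"`
			Env  []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	// Path taken from the scp destination later in the log.
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" || e.Name == "port" {
				fmt.Printf("%s: %s=%s\n", c.Name, e.Name, e.Value)
			}
		}
	}
}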
	I0919 22:39:13.252520   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:13.261653   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:13.261771   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:13.271379   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:13.292763   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:13.314362   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:13.334791   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:13.338371   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:13.350977   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.456433   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.469559   95759 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:13.469884   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.472456   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:13.474707   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.588742   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.602600   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:13.602666   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:13.602869   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605956   95759 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:39:13.605979   95759 node_ready.go:38] duration metric: took 3.097172ms for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605993   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:13.606032   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:13.618211   95759 api_server.go:72] duration metric: took 148.610181ms to wait for apiserver process to appear ...
	I0919 22:39:13.618235   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:13.618251   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:13.622760   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:13.623811   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:13.623838   95759 api_server.go:131] duration metric: took 5.597306ms to wait for apiserver health ...
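The healthz wait above is just an HTTPS GET against the apiserver until it answers 200/ok. A minimal Go sketch of an equivalent probe follows; it skips TLS verification for brevity, whereas the test itself authenticates with the cluster CA and client certificates shown earlier in the log.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the log; verification is skipped only for this sketch.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.49.2:8443/healthz")
	if err != nil {
		fmt.Println("healthz check failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", resp.Request.URL, resp.StatusCode, body)
}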
	I0919 22:39:13.623847   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:13.632153   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:13.632182   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.632190   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.632196   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.632200   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.632207   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.632210   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.632214   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.632216   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.632219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.632229   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.632233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.632237   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.632241   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.632247   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.632253   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.632256   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.632259   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.632261   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.632264   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.632274   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.632277   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.632282   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.632285   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.632288   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.632295   95759 system_pods.go:74] duration metric: took 8.442512ms to wait for pod list to return data ...
	I0919 22:39:13.632305   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:13.635316   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:13.635337   95759 default_sa.go:55] duration metric: took 3.026488ms for default service account to be created ...
	I0919 22:39:13.635346   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:13.733862   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:13.733908   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.733922   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.733929   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.733937   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.733945   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.733952   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.733958   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.733964   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.733969   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.733974   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.733985   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.733995   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.734001   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.734013   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.734018   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.734021   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.734024   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.734027   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.734033   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.734044   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.734052   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.734057   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.734065   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.734069   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.734079   95759 system_pods.go:126] duration metric: took 98.726691ms to wait for k8s-apps to be running ...
	I0919 22:39:13.734091   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:13.734175   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:13.747528   95759 system_svc.go:56] duration metric: took 13.410723ms WaitForService to wait for kubelet
	I0919 22:39:13.747570   95759 kubeadm.go:578] duration metric: took 277.970313ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:13.747595   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:13.751576   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751598   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751610   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751613   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751616   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751619   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751622   95759 node_conditions.go:105] duration metric: took 4.023347ms to run NodePressure ...
	I0919 22:39:13.751634   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:13.751651   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:13.753417   95759 out.go:203] 
	I0919 22:39:13.755135   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.755254   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.757081   95759 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:39:13.758394   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:13.759816   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:13.761015   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:13.761039   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:13.761051   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:13.761261   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:13.761304   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:13.761429   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.782360   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:13.782385   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:13.782406   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:13.782436   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:13.782501   95759 start.go:364] duration metric: took 44.732µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:39:13.782524   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:13.782534   95759 fix.go:54] fixHost starting: m04
	I0919 22:39:13.782740   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:13.801027   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:39:13.801060   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:13.802864   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:39:13.802931   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:39:14.055762   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:14.074848   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:39:14.075262   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:39:14.094352   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:14.094594   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:14.094647   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:39:14.114064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:14.114317   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:39:14.114330   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:14.114961   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50476->127.0.0.1:32828: read: connection reset by peer
	I0919 22:39:17.116460   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.118409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.120443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.120776   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.121743   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.123258   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.125391   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.125915   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.126437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.127525   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.128400   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.130402   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:53.132094   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:56.132448   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:59.133362   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:02.134004   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:05.136365   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:08.136767   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:11.137236   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:14.138295   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:17.139769   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:20.141642   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:23.143546   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:26.143966   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:29.144829   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:32.146423   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:35.148801   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:38.150005   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:41.150409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:44.150842   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:47.152406   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:50.154676   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:53.156471   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:56.157387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:59.158366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:02.160382   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:05.162387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:08.162900   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:11.163385   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:14.164700   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:17.165484   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:20.167366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:23.169809   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:26.170437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:29.171409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:32.173443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:35.175650   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:38.176984   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:41.177465   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:44.179757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:47.181386   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:50.183757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:53.185945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:56.186445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:59.187353   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:02.189451   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:05.191306   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:08.191935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:11.192418   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:14.194206   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
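The long run of `connect: connection refused` messages above is a dial-and-retry loop against the container's forwarded SSH port (127.0.0.1:32828) that can only succeed once sshd inside the restarted container is listening. A minimal sketch of that pattern, with an illustrative timeout, is:

package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH keeps dialing the forwarded SSH port until it accepts a TCP
// connection or the deadline passes, mirroring the retry loop in the log.
func waitForSSH(addr string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s: %w", addr, err)
		}
		fmt.Println("still waiting:", err)
		time.Sleep(3 * time.Second)
	}
}

func main() {
	// Address and timeout are illustrative, matching the port seen above.
	if err := waitForSSH("127.0.0.1:32828", 3*time.Minute); err != nil {
		fmt.Println(err)
	}
}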
	I0919 22:42:14.194236   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:42:14.194304   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.214461   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.214567   95759 machine.go:96] duration metric: took 3m0.119960942s to provisionDockerMachine
	I0919 22:42:14.214652   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:14.214684   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.238129   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.238280   95759 retry.go:31] will retry after 248.39527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.487752   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.507066   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.507179   95759 retry.go:31] will retry after 241.490952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.749696   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.769271   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.769394   95759 retry.go:31] will retry after 573.29064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.342939   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.361305   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.361440   95759 retry.go:31] will retry after 493.546865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.855177   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.876393   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:15.876503   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:15.876520   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.876565   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:42:15.876594   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.896632   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.896744   95759 retry.go:31] will retry after 211.367435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.109288   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.130175   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.130270   95759 retry.go:31] will retry after 289.868834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.420891   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.442472   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.442604   95759 retry.go:31] will retry after 547.590918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.990359   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:17.008923   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:17.009049   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009064   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009073   95759 fix.go:56] duration metric: took 3m3.226540631s for fixHost
	I0919 22:42:17.009081   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m3.226570319s
	W0919 22:42:17.009092   95759 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009191   95759 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009203   95759 start.go:729] Will try again in 5 seconds ...
	I0919 22:42:22.010253   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:42:22.010363   95759 start.go:364] duration metric: took 70.627µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:42:22.010395   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:42:22.010406   95759 fix.go:54] fixHost starting: m04
	I0919 22:42:22.010649   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.029262   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:42:22.029294   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:42:22.031096   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:42:22.031220   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:42:22.294621   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.313475   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:42:22.313799   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:42:22.333284   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:42:22.333514   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:42:22.333568   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:42:22.353907   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:42:22.354187   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:42:22.354204   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:42:22.354888   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32833: read: connection reset by peer
	I0919 22:42:25.355457   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.356034   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.356407   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.358370   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.359693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.360614   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.362397   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.363784   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.364408   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.366596   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.367888   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.369219   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:01.370395   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:04.371156   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:07.372724   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:10.373695   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:13.374908   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:16.375383   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:19.376388   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:22.378537   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:25.379508   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:28.380693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:31.381372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:34.383699   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:37.384935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:40.385685   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:43.388048   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:46.388445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:49.389657   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:52.391627   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:55.392687   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:58.393125   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:01.393619   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:04.395945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:07.398372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:10.398608   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:13.400912   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:16.401401   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:19.402479   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:22.404415   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:25.405562   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:28.406498   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:31.407755   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:34.410076   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:37.412454   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:40.413768   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:43.415168   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:46.416416   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:49.417399   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:52.419643   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:55.420363   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:58.420738   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:01.421609   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:04.423913   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:07.425430   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:10.426778   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:13.428381   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:16.429193   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:19.430490   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:22.432491   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:45:22.432543   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:45:22.432609   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.452712   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.452777   95759 machine.go:96] duration metric: took 3m0.119250879s to provisionDockerMachine
	I0919 22:45:22.452858   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:22.452892   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.472911   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.473047   95759 retry.go:31] will retry after 202.283506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:22.676548   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.694834   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.694965   95759 retry.go:31] will retry after 463.907197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.159340   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.178560   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.178658   95759 retry.go:31] will retry after 365.232594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.544210   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.564214   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:23.564366   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:23.564390   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.564449   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:45:23.564494   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.583703   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.583796   95759 retry.go:31] will retry after 343.872214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.928329   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.946762   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.946864   95759 retry.go:31] will retry after 341.564773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.289296   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.312255   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:24.312369   95759 retry.go:31] will retry after 341.728488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.655044   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.674698   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:24.674839   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.674858   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.674871   95759 fix.go:56] duration metric: took 3m2.664466794s for fixHost
	I0919 22:45:24.674881   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m2.664502957s
	W0919 22:45:24.674982   95759 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	* Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.677468   95759 out.go:203] 
	W0919 22:45:24.678601   95759 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.678620   95759 out.go:285] * 
	* 
	W0919 22:45:24.680349   95759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:45:24.681822   95759 out.go:203] 

                                                
                                                
** /stderr **
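For context on the stderr above: the m04 node container comes up briefly after "docker start", every SSH dial to the mapped port 32833 is then refused for roughly three minutes, and by the time provisioning gives up the container is no longer running, so the 22/tcp host-port lookup exits with code 1 and the start fails with exit status 80. As a rough manual reproduction (a sketch only; it assumes the ha-984158-m04 node container from this run still exists on the host), the two checks minikube loops on in the log can be issued directly:

	# Container state the retry loop is effectively waiting on (logged above via --format={{.State.Status}})
	docker container inspect -f '{{.State.Status}}' ha-984158-m04
	# SSH host-port lookup (22/tcp) that keeps returning exit code 1 once the container stops again
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-984158-m04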
ha_test.go:471: failed to run minikube start. args "out/minikube-linux-amd64 -p ha-984158 node list --alsologtostderr -v 5" : exit status 80
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node list --alsologtostderr -v 5
ha_test.go:481: reported node list is not the same after restart. Before restart: ha-984158	192.168.49.2
ha-984158-m02	192.168.49.3
ha-984158-m03	192.168.49.4
ha-984158-m04	

                                                
                                                
After restart: ha-984158	192.168.49.2
ha-984158-m02	192.168.49.3
ha-984158-m03	192.168.49.4
ha-984158-m04	192.168.49.5
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:38:20.505682313Z",
	            "FinishedAt": "2025-09-19T22:38:19.832335475Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e4fdcd3468198deb98d4a8f23cbd640a198a460cfea4c64e865edb3f33eaab9",
	            "SandboxKey": "/var/run/docker/netns/8e4fdcd34681",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:4b:fa:16:2f:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "b56ee79fb4c604077e565626768d3a9928d875fe4a72dd45dd22369025cf8f31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
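The inspect output above records the SSH binding for the primary node under NetworkSettings.Ports: 22/tcp is published on 127.0.0.1:32813, the port the Last Start log further below dials for "ha-984158". A shorter, equivalent way to read that mapping (assuming the ha-984158 container from this run is present) is:

	docker port ha-984158 22/tcp
	# prints the HostIp:HostPort pair shown above, e.g. 127.0.0.1:32813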
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.34206274s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node start m02 --alsologtostderr -v 5                                                                                      │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ stop    │ ha-984158 stop --alsologtostderr -v 5                                                                                                │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:38 UTC │
	│ start   │ ha-984158 start --wait true --alsologtostderr -v 5                                                                                   │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:38:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:38:20.249865   95759 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:20.249988   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.249994   95759 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:20.250000   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.250249   95759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:38:20.250707   95759 out.go:368] Setting JSON to false
	I0919 22:38:20.251700   95759 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4850,"bootTime":1758316650,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:38:20.251800   95759 start.go:140] virtualization: kvm guest
	I0919 22:38:20.254109   95759 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:38:20.255764   95759 notify.go:220] Checking for updates...
	I0919 22:38:20.255845   95759 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:38:20.257481   95759 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:38:20.259062   95759 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:20.260518   95759 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:38:20.262187   95759 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:38:20.263765   95759 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:38:20.265783   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:20.265907   95759 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:38:20.294398   95759 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:38:20.294613   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.361388   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.349869718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.361497   95759 docker.go:318] overlay module found
	I0919 22:38:20.363722   95759 out.go:179] * Using the docker driver based on existing profile
	I0919 22:38:20.365305   95759 start.go:304] selected driver: docker
	I0919 22:38:20.365327   95759 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.365467   95759 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:38:20.365552   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.420337   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.409819419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.420989   95759 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:38:20.421017   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:20.421096   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:20.421172   95759 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.423543   95759 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:38:20.425622   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:20.427928   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:20.429486   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:20.429552   95759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:38:20.429561   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:20.429624   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:20.429683   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:20.429696   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:20.429903   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.451753   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:20.451777   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:20.451800   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:20.451830   95759 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:20.451903   95759 start.go:364] duration metric: took 52.261µs to acquireMachinesLock for "ha-984158"
	I0919 22:38:20.451929   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:20.451935   95759 fix.go:54] fixHost starting: 
	I0919 22:38:20.452267   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.470646   95759 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:38:20.470675   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:20.473543   95759 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:38:20.473635   95759 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:38:20.725924   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.747322   95759 kic.go:430] container "ha-984158" state is running.
	I0919 22:38:20.748445   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:20.768582   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.768847   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:20.768938   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:20.788669   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:20.788894   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:20.788907   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:20.789621   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46262->127.0.0.1:32813: read: connection reset by peer
	I0919 22:38:23.928529   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:23.928563   95759 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:38:23.928620   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:23.947237   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:23.947447   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:23.947461   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:38:24.095390   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:24.095477   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.113617   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.113853   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.113878   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:24.249977   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:24.250008   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:24.250048   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:24.250058   95759 provision.go:84] configureAuth start
	I0919 22:38:24.250116   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:24.268530   95759 provision.go:143] copyHostCerts
	I0919 22:38:24.268578   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268614   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:24.268624   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268699   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:24.268797   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268816   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:24.268820   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268848   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:24.268908   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268928   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:24.268932   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268959   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:24.269015   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:38:24.530322   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:24.530388   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:24.530429   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.549937   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:24.649314   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:24.649386   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:24.674567   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:24.674639   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:38:24.700190   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:24.700255   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:24.725998   95759 provision.go:87] duration metric: took 475.930644ms to configureAuth
	I0919 22:38:24.726025   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:24.726265   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:24.726378   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.744668   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.744868   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.744887   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:25.041744   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:25.041773   95759 machine.go:96] duration metric: took 4.2729084s to provisionDockerMachine
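The provisionDockerMachine phase above is nothing more than a series of shell commands executed over the container's forwarded SSH port (127.0.0.1:32813 in this run). Below is a minimal, illustrative Go sketch of that pattern using the standard golang.org/x/crypto/ssh package; the port and key path are taken from the log, but the helper name and error handling are assumptions for illustration, not minikube's actual code.

// sketch: run one command over SSH, the way the provisioning steps above
// (hostname, tee /etc/hostname, systemctl restart crio, ...) are executed.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test environment only
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// port and key path as reported in the log above
	out, err := runRemote("127.0.0.1:32813", "docker",
		"/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa",
		"hostname")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Print(out)
}

Running this against a live minikube container should print the machine hostname (ha-984158), mirroring the first "About to run SSH command: hostname" exchange in the log.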
	I0919 22:38:25.041790   95759 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:38:25.041804   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:25.041885   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:25.041937   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.061613   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.158944   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:25.162445   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:25.162473   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:25.162481   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:25.162487   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:25.162497   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:25.162543   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:25.162612   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:25.162622   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:25.162697   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:25.171420   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:25.196548   95759 start.go:296] duration metric: took 154.74522ms for postStartSetup
	I0919 22:38:25.196622   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:25.196658   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.214818   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.307266   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:25.311757   95759 fix.go:56] duration metric: took 4.859817354s for fixHost
	I0919 22:38:25.311786   95759 start.go:83] releasing machines lock for "ha-984158", held for 4.859867111s
	I0919 22:38:25.311855   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:25.331292   95759 ssh_runner.go:195] Run: cat /version.json
	I0919 22:38:25.331342   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.331445   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:25.331519   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.350964   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.351259   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.521285   95759 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:25.525969   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:25.668131   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:25.673196   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.683302   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:25.683463   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.693199   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:25.693229   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:25.693261   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:25.693301   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:25.705935   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:25.717521   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:25.717575   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:25.730590   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:25.742679   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:25.806884   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:25.876321   95759 docker.go:234] disabling docker service ...
	I0919 22:38:25.876399   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:25.889742   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:25.902299   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:25.968552   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:26.035171   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:26.047090   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:26.063771   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:26.063823   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.074242   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:26.074296   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.085364   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.096159   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.106569   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:26.116384   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.127163   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.138533   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.149140   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:26.157845   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:26.166573   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.230447   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:38:26.333573   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:38:26.333644   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:38:26.337977   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:38:26.338040   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:38:26.341911   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:38:26.375206   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:38:26.375273   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.410086   95759 ssh_runner.go:195] Run: crio --version
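The cri-o configuration step above is a handful of sed edits to /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup) followed by a crio restart. The Go sketch below performs the same line rewrites on an in-memory copy, purely to show what the sed commands change; the starting file contents are made up for illustration.

// sketch: the rewrites the sed commands above apply to 02-crio.conf,
// done on a local string instead of the remote file.
package main

import (
	"fmt"
	"regexp"
)

func main() {
	conf := `pause_image = "registry.k8s.io/pause:3.9"
cgroup_manager = "cgroupfs"
conmon_cgroup = "system.slice"
`
	// point cri-o at the pause image minikube expects
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.10.1"`)
	// switch the cgroup manager to systemd (matching the detected host driver)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAllString(conf, `cgroup_manager = "systemd"`)
	// drop any existing conmon_cgroup line, then re-add it as "pod"
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).
		ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")
	fmt.Print(conf)
}

After these edits the runtime is restarted and the log waits up to 60s for /var/run/crio/crio.sock to reappear before probing crictl.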
	I0919 22:38:26.448363   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:38:26.449629   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:38:26.467494   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:38:26.471488   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.484310   95759 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:38:26.484505   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:26.484557   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.531218   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.531242   95759 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:38:26.531296   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.567181   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.567205   95759 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:38:26.567217   95759 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:38:26.567354   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:38:26.567443   95759 ssh_runner.go:195] Run: crio config
	I0919 22:38:26.612533   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:26.612558   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:26.612573   95759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:38:26.612596   95759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:38:26.612731   95759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
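The "kubeadm config:" block above is a single multi-document YAML: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration, which is later written to /var/tmp/minikube/kubeadm.yaml.new (the 2205-byte scp further down). As a small sketch, the Go program below splits such a rendered config on document separators and reports each document's kind; the embedded string is a trimmed stand-in for the YAML shown above.

// sketch: split a rendered multi-document kubeadm config and list each kind.
package main

import (
	"fmt"
	"regexp"
	"strings"
)

func main() {
	config := `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
`
	kindRe := regexp.MustCompile(`(?m)^kind:\s*(\S+)`)
	for i, doc := range strings.Split(config, "\n---\n") {
		if m := kindRe.FindStringSubmatch(doc); m != nil {
			fmt.Printf("document %d: %s\n", i+1, m[1])
		}
	}
}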
	
	I0919 22:38:26.612751   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:38:26.612791   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:38:26.625916   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:26.626026   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
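Because the lsmod probe above exited with status 1, the ip_vs kernel module is treated as unavailable, so kube-vip is configured without IPVS-based control-plane load balancing and only advertises the VIP 192.168.49.254 via ARP; the manifest shown is then written as a static pod to /etc/kubernetes/manifests/kube-vip.yaml (the 1358-byte scp a few lines down). A minimal Go sketch of the same check, reading /proc/modules instead of shelling out to lsmod, is below; like lsmod, it only sees loadable modules, not built-in functionality.

// sketch: is the ip_vs kernel module loaded? (roughly the decision the
// `lsmod | grep ip_vs` probe above makes)
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func moduleLoaded(name string) (bool, error) {
	f, err := os.Open("/proc/modules")
	if err != nil {
		return false, err
	}
	defer f.Close()
	s := bufio.NewScanner(f)
	for s.Scan() {
		if strings.HasPrefix(s.Text(), name+" ") {
			return true, nil
		}
	}
	return false, s.Err()
}

func main() {
	ok, err := moduleLoaded("ip_vs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if ok {
		fmt.Println("ip_vs loaded: IPVS control-plane load balancing is possible")
	} else {
		fmt.Println("ip_vs missing: ARP-only VIP, as in the log above")
	}
}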
	I0919 22:38:26.626083   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:38:26.636322   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:38:26.636382   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:38:26.645958   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:38:26.665184   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:38:26.684627   95759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:38:26.703734   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:38:26.722194   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:38:26.726033   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
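The bash one-liner above updates /etc/hosts by filtering out any existing control-plane.minikube.internal entry, appending the new mapping to a temp file, and copying it back with sudo; the same pattern was used earlier for host.minikube.internal. The Go sketch below performs the equivalent filter-and-append rewrite; the path and entry are hardcoded stand-ins, and it writes to a scratch file rather than the real /etc/hosts.

// sketch: drop any stale mapping for a hostname, then append the new one,
// mirroring the /etc/hosts rewrites in the log above.
package main

import (
	"fmt"
	"os"
	"strings"
)

func upsertHost(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil && !os.IsNotExist(err) {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // remove the old entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	path := "/tmp/hosts.example" // stand-in; the log rewrites /etc/hosts via sudo cp
	if err := upsertHost(path, "192.168.49.254", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("updated", path)
}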
	I0919 22:38:26.737748   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.802332   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:38:26.828015   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:38:26.828140   95759 certs.go:194] generating shared ca certs ...
	I0919 22:38:26.828156   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:26.828370   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:38:26.828426   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:38:26.828439   95759 certs.go:256] generating profile certs ...
	I0919 22:38:26.828533   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:38:26.828559   95759 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24
	I0919 22:38:26.828573   95759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:38:27.179556   95759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 ...
	I0919 22:38:27.179596   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24: {Name:mk0ca61656ed051ffa5dbf8b847da7c47b965f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179810   95759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 ...
	I0919 22:38:27.179828   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24: {Name:mk16b6aae6417eca80799eff0a4c27dc0860bcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179937   95759 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:38:27.180098   95759 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:38:27.180260   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:38:27.180276   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:38:27.180289   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:38:27.180307   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:38:27.180321   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:38:27.180334   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:38:27.180354   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:38:27.180364   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:38:27.180373   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:38:27.180419   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:38:27.180445   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:38:27.180454   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:38:27.180474   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:38:27.180497   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:38:27.180517   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:38:27.180557   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:27.180607   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.180624   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.180637   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.181195   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:38:27.209358   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:38:27.235624   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:38:27.260629   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:38:27.286335   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:38:27.312745   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:38:27.340226   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:38:27.366125   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:38:27.395452   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:38:27.424801   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:38:27.463750   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:38:27.502091   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:38:27.530600   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:38:27.538166   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:38:27.552357   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559014   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559181   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.569405   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:38:27.582829   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:38:27.597217   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602410   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602472   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.610784   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:38:27.624272   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:38:27.635899   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640089   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640162   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.647669   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:38:27.657702   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:38:27.661673   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:38:27.669449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:38:27.676756   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:38:27.683701   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:38:27.690945   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:38:27.698327   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
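Each of the `openssl x509 ... -checkend 86400` runs above exits non-zero if the certificate expires within the next 24 hours, which is what decides whether the existing control-plane certificates can be reused on restart. An equivalent check in Go with crypto/x509 is sketched below; the file path is one of the certs named in the log and obviously has to exist wherever the sketch is run.

// sketch: the `openssl x509 -checkend 86400` test in Go — does the
// certificate expire within the next 24 hours?
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// same file the log checks; adjust the path when running elsewhere
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h: regenerate")
	} else {
		fmt.Println("certificate valid for at least 24h: reuse")
	}
}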
	I0919 22:38:27.705328   95759 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:27.705437   95759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:38:27.705491   95759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:38:27.743232   95759 cri.go:89] found id: "55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645"
	I0919 22:38:27.743258   95759 cri.go:89] found id: "79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9"
	I0919 22:38:27.743263   95759 cri.go:89] found id: "32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3"
	I0919 22:38:27.743269   95759 cri.go:89] found id: "935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba"
	I0919 22:38:27.743273   95759 cri.go:89] found id: "13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87"
	I0919 22:38:27.743277   95759 cri.go:89] found id: ""
	I0919 22:38:27.743327   95759 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:38:27.766931   95759 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","pid":859,"status":"running","bundle":"/run/containers/storage/overlay-containers/13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87/userdata","rootfs":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","created":"2025-09-19T22:38:27.457722678Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"n
ame\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.401544575Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223d
c4628f1e45acc16ccdb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/ee72b99d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kub
ernetes.io/config.seen":"2025-09-19T22:38:26.901880352Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","pid":878,"status":"running","bundle":"/run/containers/storage/overlay-containers/32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3/userdata","rootfs":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","created":"2025-09-19T22:38:27.465010624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMe
ssagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.416092699Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb
866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"cont
ainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/a6d77d36\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:38:26.901891443Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd
.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","pid":954,"status":"running","bundle":"/run/containers/storage/overlay-containers/55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645/userdata","rootfs":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","created":"2025-09-19T22:38:27.515032823Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.
hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.443516596Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-98415
8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kuberne
tes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/d0001fc3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"hos
t_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:38:26.901886915Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79c74b643f5a5959b25d582e997875f3399705b
3da970e161badc0d1521410a9","pid":921,"status":"running","bundle":"/run/containers/storage/overlay-containers/79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9/userdata","rootfs":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","created":"2025-09-19T22:38:27.502254065Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.ku
bernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.438041518Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-
scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/6dc9da94\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:38:26.901890185Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDepen
dencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","pid":903,"status":"running","bundle":"/run/containers/storage/overlay-containers/935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba/userdata","rootfs":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","created":"2025-09-19T22:38:27.483620953Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7e
aa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.414415487Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controlle
r-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6b05a580a113699
67b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/e63161fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonl
y\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a11369967b27d393af16",
"kubernetes.io/config.seen":"2025-09-19T22:38:26.901888813Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:38:27.767290   95759 cri.go:126] list returned 5 containers
	I0919 22:38:27.767310   95759 cri.go:129] container: {ID:13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 Status:running}
	I0919 22:38:27.767328   95759 cri.go:135] skipping {13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 running}: state = "running", want "paused"
	I0919 22:38:27.767344   95759 cri.go:129] container: {ID:32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 Status:running}
	I0919 22:38:27.767353   95759 cri.go:135] skipping {32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 running}: state = "running", want "paused"
	I0919 22:38:27.767369   95759 cri.go:129] container: {ID:55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 Status:running}
	I0919 22:38:27.767378   95759 cri.go:135] skipping {55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 running}: state = "running", want "paused"
	I0919 22:38:27.767384   95759 cri.go:129] container: {ID:79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 Status:running}
	I0919 22:38:27.767393   95759 cri.go:135] skipping {79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 running}: state = "running", want "paused"
	I0919 22:38:27.767399   95759 cri.go:129] container: {ID:935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba Status:running}
	I0919 22:38:27.767405   95759 cri.go:135] skipping {935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba running}: state = "running", want "paused"
	I0919 22:38:27.767454   95759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:38:27.777467   95759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:38:27.777485   95759 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:38:27.777529   95759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:38:27.786748   95759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:27.787254   95759 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.787385   95759 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:38:27.787739   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.788395   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:38:27.788915   95759 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:38:27.788933   95759 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:38:27.788940   95759 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:38:27.788945   95759 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:38:27.788950   95759 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:38:27.788983   95759 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:38:27.789419   95759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:38:27.799384   95759 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:38:27.799408   95759 kubeadm.go:593] duration metric: took 21.916898ms to restartPrimaryControlPlane
	I0919 22:38:27.799419   95759 kubeadm.go:394] duration metric: took 94.114072ms to StartCluster
	I0919 22:38:27.799438   95759 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.799508   95759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.800283   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.800531   95759 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:38:27.800560   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:38:27.800569   95759 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:38:27.800796   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.803656   95759 out.go:179] * Enabled addons: 
	I0919 22:38:27.804977   95759 addons.go:514] duration metric: took 4.403593ms for enable addons: enabled=[]
	I0919 22:38:27.805014   95759 start.go:246] waiting for cluster config update ...
	I0919 22:38:27.805026   95759 start.go:255] writing updated cluster config ...
	I0919 22:38:27.806661   95759 out.go:203] 
	I0919 22:38:27.808147   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.808240   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.809900   95759 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:38:27.811058   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:27.812367   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:27.813643   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:27.813670   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:27.813747   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:27.813763   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:27.813745   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:27.813880   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.838519   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:27.838542   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:27.838565   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:27.838595   95759 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:27.838659   95759 start.go:364] duration metric: took 44.758µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:38:27.838683   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:27.838692   95759 fix.go:54] fixHost starting: m02
	I0919 22:38:27.838992   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:27.861121   95759 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:38:27.861152   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:27.863184   95759 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:38:27.863257   95759 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:38:28.125822   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:28.146346   95759 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:38:28.146733   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:28.168173   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:28.168475   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:28.168559   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:28.189073   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:28.189415   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:28.189432   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:28.190241   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45924->127.0.0.1:32818: read: connection reset by peer
	I0919 22:38:31.326317   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.326343   95759 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:38:31.326396   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.346064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.346303   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.346317   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:38:31.495830   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.495906   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.515009   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.515247   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.515266   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:31.654008   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:31.654036   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:31.654057   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:31.654067   95759 provision.go:84] configureAuth start
	I0919 22:38:31.654148   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:31.672869   95759 provision.go:143] copyHostCerts
	I0919 22:38:31.672912   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.672970   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:31.672984   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.673073   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:31.673199   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673230   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:31.673241   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673286   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:31.673375   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673403   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:31.673410   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673450   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:31.673525   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:38:31.832848   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:31.832920   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:31.832966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.850721   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:31.949325   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:31.949391   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:31.976597   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:31.976650   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:38:32.002584   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:32.002653   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:32.035331   95759 provision.go:87] duration metric: took 381.249624ms to configureAuth
	I0919 22:38:32.035366   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:32.035610   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:32.035718   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.058439   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:32.058702   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:32.058739   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:32.484521   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:32.484550   95759 machine.go:96] duration metric: took 4.316059426s to provisionDockerMachine
	I0919 22:38:32.484563   95759 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:38:32.484576   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:32.484635   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:32.484697   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.510926   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.619996   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:32.629566   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:32.629676   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:32.629727   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:32.629764   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:32.629806   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:32.629922   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:32.630086   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:32.630147   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:32.630353   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:32.645004   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:32.675202   95759 start.go:296] duration metric: took 190.622889ms for postStartSetup
	I0919 22:38:32.675288   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:32.675327   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.697580   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.795763   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:32.801249   95759 fix.go:56] duration metric: took 4.962547133s for fixHost
	I0919 22:38:32.801275   95759 start.go:83] releasing machines lock for "ha-984158-m02", held for 4.962602853s
	I0919 22:38:32.801364   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:32.827878   95759 out.go:179] * Found network options:
	I0919 22:38:32.829587   95759 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:38:32.830969   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:38:32.831030   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:38:32.831146   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:32.831196   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.831204   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:32.831253   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.853448   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.853718   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:33.150612   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:33.160301   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.176730   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:33.176815   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.191328   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:33.191364   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:33.191416   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:33.191485   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:33.213815   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:33.231542   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:33.231635   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:33.247095   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:33.260329   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:33.380840   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:33.498308   95759 docker.go:234] disabling docker service ...
	I0919 22:38:33.498382   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:33.517853   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:33.536133   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:33.652463   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:33.761899   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:33.774677   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:33.793915   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:33.793969   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.804996   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:33.805057   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.816056   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.827802   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.840124   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:33.850301   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.861287   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.871826   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.883496   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:33.893950   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:33.906440   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:34.043971   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:04.326209   95759 ssh_runner.go:235] Completed: sudo systemctl restart crio: (30.282202499s)
	I0919 22:39:04.326243   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:04.326297   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:04.330226   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:04.330288   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:04.334075   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:04.369702   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:04.369800   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.406718   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.445793   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:04.446931   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:04.448076   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:04.466313   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:04.470940   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:04.487515   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:04.487734   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:04.487986   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:04.509829   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:04.510158   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:39:04.510174   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:04.510188   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:04.510345   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:04.510395   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:04.510409   95759 certs.go:256] generating profile certs ...
	I0919 22:39:04.510508   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:04.510584   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.84abfbbb
	I0919 22:39:04.510636   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:04.510651   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:04.510678   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:04.510696   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:04.510717   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:04.510733   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:04.510752   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:04.510781   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:04.510806   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:04.510875   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:04.510915   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:04.510928   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:04.510960   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:04.510988   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:04.511020   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:04.511077   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:04.511136   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:04.511156   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:04.511176   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:04.511229   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:04.532173   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:04.620518   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:04.624965   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:04.638633   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:04.642459   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:04.656462   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:04.660491   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:04.673947   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:04.678496   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:04.694022   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:04.698129   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:04.711457   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:04.715160   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:04.729617   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:04.756565   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:04.783062   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:04.808557   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:04.834684   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:04.860337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:04.887473   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:04.913478   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:04.941337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:04.967151   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:04.994669   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:05.028238   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:05.050978   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:05.073833   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:05.097285   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:05.120404   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:05.142847   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:05.163160   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:05.184053   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:05.190286   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:05.200925   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.204978   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.205054   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.211914   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:05.222874   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:05.234900   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238900   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238947   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.246276   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:05.255894   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:05.266269   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270313   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270382   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.278196   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:05.287746   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:05.291476   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:05.298503   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:05.305486   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:05.312720   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:05.319784   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:05.327527   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:39:05.334693   95759 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:39:05.334792   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:05.334818   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:05.334851   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:05.347510   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.347572   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
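The YAML above is the kube-vip static pod manifest minikube generates for this control-plane node; a few lines below it is copied to /etc/kubernetes/manifests/kube-vip.yaml so the kubelet runs it directly, and 192.168.49.254 is the HA virtual IP advertised via ARP. Because the "lsmod | grep ip_vs" probe exited non-zero, IPVS-based control-plane load balancing is skipped and only leader-elected VIP failover is configured. A small Go sketch of that probe follows, assuming a local shell (minikube actually runs the command through its ssh_runner on the node):

    package main

    import (
        "fmt"
        "os/exec"
    )

    // ipvsModulesLoaded mirrors the probe in the log: run "lsmod | grep ip_vs"
    // and treat a non-zero grep exit as "ip_vs not available", in which case
    // kube-vip is configured without IPVS control-plane load balancing.
    // Illustrative sketch, not minikube's actual helper.
    func ipvsModulesLoaded() bool {
        return exec.Command("sh", "-c", "lsmod | grep -q ip_vs").Run() == nil
    }

    func main() {
        fmt.Println("ip_vs modules loaded:", ipvsModulesLoaded())
    }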
	I0919 22:39:05.347618   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:05.356984   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:05.357056   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:05.367597   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:05.387861   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:05.406815   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:05.427878   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:05.432487   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
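The one-liner above pins control-plane.minikube.internal to the HA virtual IP 192.168.49.254 in /etc/hosts: it drops any existing entry for that name, appends the new mapping, writes the result to a temp file, and installs it with "sudo cp" (a plain shell redirect would run without root). A hedged Go sketch of the same hosts-file rewrite, with the path, IP, and hostname taken from the log and everything else illustrative:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry rewrites hostsPath so that exactly one tab-separated line
    // maps host to ip, mirroring the grep -v / echo / cp one-liner in the log.
    // Illustrative only: the real flow runs remotely over SSH and uses sudo
    // for the final copy.
    func setHostsEntry(hostsPath, ip, host string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+host) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := setHostsEntry("/etc/hosts", "192.168.49.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }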
	I0919 22:39:05.444804   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.548051   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.560978   95759 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:05.561299   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.563075   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:05.564716   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.672434   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.689063   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:05.689191   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:05.689392   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698088   95759 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:39:05.698164   95759 node_ready.go:38] duration metric: took 8.753764ms for node "ha-984158-m02" to be "Ready" ...
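The node_ready wait above polls the Node object and inspects its Ready condition; here ha-984158-m02 was already Ready, so the wait returned in under 10 ms. A client-go sketch of the same check is below. Constructing the clientset from the cluster credentials shown in the kapi.go line is assumed, and the KUBECONFIG environment variable used here is only a placeholder.

    package main

    import (
        "context"
        "fmt"
        "os"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Checks whether a node reports the Ready condition, the same thing the
    // node_ready wait in the log is polling for.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "ha-984158-m02", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        ready := false
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                ready = true
            }
        }
        fmt.Println("node Ready:", ready)
    }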
	I0919 22:39:05.698182   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:05.698299   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.711300   95759 api_server.go:72] duration metric: took 150.274321ms to wait for apiserver process to appear ...
	I0919 22:39:05.711326   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:05.711345   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:05.716499   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:05.717555   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:05.717586   95759 api_server.go:131] duration metric: took 6.25291ms to wait for apiserver health ...
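The healthz wait above issues GET https://192.168.49.2:8443/healthz (note the stale VIP host was overridden two lines earlier) and expects HTTP 200 with body "ok". A minimal Go sketch of that probe follows; it skips TLS verification for brevity, whereas the real client authenticates with the cluster CA and client certificate shown in the kapi.go config.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // Polls the apiserver /healthz endpoint the way the api_server wait in the
    // log does. Sketch only: InsecureSkipVerify stands in for the real CA and
    // client-certificate configuration.
    func main() {
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get("https://192.168.49.2:8443/healthz")
        if err != nil {
            fmt.Println("healthz error:", err)
            return
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("healthz: %d %s\n", resp.StatusCode, string(body))
    }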
	I0919 22:39:05.717595   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:05.724069   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:05.724156   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724172   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724180   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.724186   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.724191   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.724196   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.724201   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.724210   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.724219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.724226   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.724233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.724241   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.724248   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.724256   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.724262   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.724268   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.724277   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.724285   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.724293   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.724298   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.724303   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.724308   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.724317   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.724325   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.724338   95759 system_pods.go:74] duration metric: took 6.735402ms to wait for pod list to return data ...
	I0919 22:39:05.724355   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:05.728216   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:05.728243   95759 default_sa.go:55] duration metric: took 3.879783ms for default service account to be created ...
	I0919 22:39:05.728256   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:05.733903   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:05.733937   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733945   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733951   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.733954   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.733958   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.733961   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.733964   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.733969   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.733973   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.733976   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.733979   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.733982   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.733986   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.733990   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.733993   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.733995   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.733999   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.734007   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.734010   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.734013   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.734016   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.734019   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.734022   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.734025   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.734035   95759 system_pods.go:126] duration metric: took 5.77298ms to wait for k8s-apps to be running ...
	I0919 22:39:05.734044   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:05.734085   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.746589   95759 system_svc.go:56] duration metric: took 12.533548ms WaitForService to wait for kubelet
	I0919 22:39:05.746629   95759 kubeadm.go:578] duration metric: took 185.605298ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:05.746655   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:05.750196   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750221   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750233   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750236   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750240   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750242   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750246   95759 node_conditions.go:105] duration metric: took 3.586256ms to run NodePressure ...
	I0919 22:39:05.750259   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:05.750286   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:05.752610   95759 out.go:203] 
	I0919 22:39:05.754285   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.754392   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.756186   95759 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:39:05.757628   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:05.758862   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:05.760172   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:05.760197   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:05.760252   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:05.760314   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:05.760332   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:05.760441   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.782434   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:05.782456   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:05.782471   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:05.782504   95759 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:05.782575   95759 start.go:364] duration metric: took 49.512µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:39:05.782600   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:05.782610   95759 fix.go:54] fixHost starting: m03
	I0919 22:39:05.782826   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:05.800849   95759 fix.go:112] recreateIfNeeded on ha-984158-m03: state=Stopped err=<nil>
	W0919 22:39:05.800880   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:05.803272   95759 out.go:252] * Restarting existing docker container for "ha-984158-m03" ...
	I0919 22:39:05.803361   95759 cli_runner.go:164] Run: docker start ha-984158-m03
	I0919 22:39:06.059506   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:06.078641   95759 kic.go:430] container "ha-984158-m03" state is running.
	I0919 22:39:06.079004   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:06.099001   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:06.099262   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:06.099315   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:06.117915   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:06.118166   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:06.118181   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:06.118862   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49366->127.0.0.1:32823: read: connection reset by peer
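The dial error above is expected: "docker start" returned before sshd inside the freshly restarted container was accepting connections, so libmachine retries until the hostname command succeeds about three seconds later, as the next lines show. A rough Go sketch of such a wait loop, using the forwarded port 32823 from the log; a real implementation would retry the full SSH handshake rather than just the TCP connect.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // waitForSSH dials the forwarded SSH port until it accepts a TCP
    // connection, mirroring the retry behaviour after the "connection reset
    // by peer" error above. Illustrative only.
    func waitForSSH(addr string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
            if err == nil {
                conn.Close()
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("ssh on %s not reachable within %s", addr, timeout)
    }

    func main() {
        if err := waitForSSH("127.0.0.1:32823", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }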
	I0919 22:39:09.258735   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.258764   95759 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:39:09.258824   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.277807   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.278027   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.278041   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:39:09.428956   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.429040   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.447284   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.447535   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.447560   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:39:09.593539   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:09.593573   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:39:09.593598   95759 ubuntu.go:190] setting up certificates
	I0919 22:39:09.593609   95759 provision.go:84] configureAuth start
	I0919 22:39:09.593674   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:09.617495   95759 provision.go:143] copyHostCerts
	I0919 22:39:09.617537   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617594   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:39:09.617607   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617684   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:39:09.617811   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.617846   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:39:09.617853   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.618482   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:39:09.618632   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618662   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:39:09.618671   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618706   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:39:09.618780   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
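The provision step above signs a per-machine Docker server certificate with the docker-machine CA, placing both the node IP and its hostnames in the subject alternative names. A Go sketch of that flow with crypto/x509 follows; the organisation and SAN list are copied from the log line, while the key size, serial number, validity window, and the loadCA helper are illustrative assumptions (minikube/libmachine has its own certificate helpers).

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // SANs from the log: IPs go into IPAddresses, everything else into DNSNames.
        sans := []string{"127.0.0.1", "192.168.49.4", "ha-984158-m03", "localhost", "minikube"}
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-984158-m03"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(10, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        for _, san := range sans {
            if ip := net.ParseIP(san); ip != nil {
                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
            } else {
                tmpl.DNSNames = append(tmpl.DNSNames, san)
            }
        }

        caCert, caKey := loadCA("ca.pem", "ca-key.pem") // hypothetical helper defined below
        serverKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }

    // loadCA parses a PEM CA certificate and its PKCS#1 RSA private key from
    // disk. Error handling is omitted for brevity in this sketch.
    func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
        certPEM, _ := os.ReadFile(certPath)
        keyPEM, _ := os.ReadFile(keyPath)
        cb, _ := pem.Decode(certPEM)
        kb, _ := pem.Decode(keyPEM)
        cert, _ := x509.ParseCertificate(cb.Bytes)
        key, _ := x509.ParsePKCS1PrivateKey(kb.Bytes)
        return cert, key
    }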
	I0919 22:39:09.838307   95759 provision.go:177] copyRemoteCerts
	I0919 22:39:09.838429   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:39:09.838478   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.863933   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:09.983312   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:39:09.983424   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:39:10.021925   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:39:10.022008   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:39:10.063154   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:39:10.063276   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:39:10.104760   95759 provision.go:87] duration metric: took 511.137266ms to configureAuth
	I0919 22:39:10.104795   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:39:10.105072   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:10.105290   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.130112   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:10.130385   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:10.130414   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:39:10.533816   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:39:10.533844   95759 machine.go:96] duration metric: took 4.434568252s to provisionDockerMachine
	I0919 22:39:10.533858   95759 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:39:10.533871   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:39:10.533932   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:39:10.533966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.553604   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.653755   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:39:10.657424   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:39:10.657456   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:39:10.657463   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:39:10.657469   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:39:10.657479   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:39:10.657531   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:39:10.657598   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:39:10.657608   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:39:10.657691   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:39:10.667261   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:10.700579   95759 start.go:296] duration metric: took 166.704996ms for postStartSetup
	I0919 22:39:10.700662   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:10.700704   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.728418   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.830886   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:10.836158   95759 fix.go:56] duration metric: took 5.053541909s for fixHost
	I0919 22:39:10.836186   95759 start.go:83] releasing machines lock for "ha-984158-m03", held for 5.053597855s
	I0919 22:39:10.836256   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:10.859049   95759 out.go:179] * Found network options:
	I0919 22:39:10.860801   95759 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:39:10.862070   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862112   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862141   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862155   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:39:10.862232   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:39:10.862282   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.862297   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:39:10.862360   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.885568   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.886944   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:11.122339   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:39:11.127789   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.138248   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:39:11.138341   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.147671   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:39:11.147698   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:39:11.147735   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:39:11.147774   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:39:11.160936   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:39:11.174826   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:39:11.174888   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:39:11.190348   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:39:11.203116   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:39:11.321919   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:39:11.432545   95759 docker.go:234] disabling docker service ...
	I0919 22:39:11.432608   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:39:11.446263   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:39:11.458056   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:39:11.572334   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:39:11.685921   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:39:11.698336   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:39:11.718031   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:39:11.718164   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.731929   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:39:11.732016   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.743385   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.755175   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.766807   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:39:11.779733   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.791806   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.802833   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.813877   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:39:11.824761   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:39:11.835392   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:11.940776   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
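Taken together, the sed commands above configure CRI-O before this restart: they pin the pause image, switch the cgroup manager to systemd, put conmon into the pod cgroup, and allow unprivileged binds to low ports. Reconstructed from those expressions, the drop-in /etc/crio/crio.conf.d/02-crio.conf ends up containing roughly the fragment below; the section headers are the usual CRI-O ones and are assumed here, since the log does not echo the file.

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.10.1"

    [crio.runtime]
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]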
	I0919 22:39:12.206168   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:12.206252   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:12.210177   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:12.210235   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:12.213924   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:12.250824   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:12.250899   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.288367   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.331200   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:12.332776   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:12.334399   95759 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:39:12.335764   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:12.353568   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:12.357576   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:12.370671   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:12.370930   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:12.371317   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:12.389760   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:12.390003   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:39:12.390016   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:12.390030   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:12.390204   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:12.390274   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:12.390289   95759 certs.go:256] generating profile certs ...
	I0919 22:39:12.390403   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:12.390484   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:39:12.390533   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:12.390549   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:12.390568   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:12.390585   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:12.390601   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:12.390614   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:12.390628   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:12.390641   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:12.390653   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:12.390711   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:12.390749   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:12.390761   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:12.390789   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:12.390812   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:12.390832   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:12.390871   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:12.390895   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:12.390910   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:12.390923   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:12.390971   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:12.408363   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:12.497500   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:12.501626   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:12.514736   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:12.518842   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:12.534226   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:12.538486   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:12.551906   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:12.555555   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:12.568778   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:12.573237   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:12.587524   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:12.591646   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:12.605021   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:12.632905   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:12.658562   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:12.685222   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:12.710986   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:12.742821   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:12.774649   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:12.808068   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:12.840999   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:12.873033   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:12.904176   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:12.935469   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:12.958451   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:12.983716   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:13.006372   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:13.026634   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:13.048003   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:13.067093   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:13.091242   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:13.097309   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:13.107657   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111389   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111438   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.118417   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:13.129698   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:13.140452   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144194   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144245   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.151266   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:13.161188   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:13.171891   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176332   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176413   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.184138   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:13.193625   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:13.197577   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:13.204628   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:13.211553   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:13.218449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:13.225712   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:13.232770   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:39:13.239778   95759 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:39:13.239885   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:13.239907   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:13.239943   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:13.252386   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:13.252462   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:39:13.252520   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:13.261653   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:13.261771   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:13.271379   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:13.292763   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:13.314362   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:13.334791   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:13.338371   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:13.350977   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.456433   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.469559   95759 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:13.469884   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.472456   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:13.474707   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.588742   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.602600   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:13.602666   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:13.602869   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605956   95759 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:39:13.605979   95759 node_ready.go:38] duration metric: took 3.097172ms for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605993   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:13.606032   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:13.618211   95759 api_server.go:72] duration metric: took 148.610181ms to wait for apiserver process to appear ...
	I0919 22:39:13.618235   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:13.618251   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:13.622760   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:13.623811   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:13.623838   95759 api_server.go:131] duration metric: took 5.597306ms to wait for apiserver health ...
	I0919 22:39:13.623847   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:13.632153   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:13.632182   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.632190   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.632196   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.632200   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.632207   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.632210   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.632214   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.632216   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.632219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.632229   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.632233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.632237   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.632241   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.632247   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.632253   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.632256   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.632259   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.632261   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.632264   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.632274   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.632277   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.632282   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.632285   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.632288   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.632295   95759 system_pods.go:74] duration metric: took 8.442512ms to wait for pod list to return data ...
	I0919 22:39:13.632305   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:13.635316   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:13.635337   95759 default_sa.go:55] duration metric: took 3.026488ms for default service account to be created ...
	I0919 22:39:13.635346   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:13.733862   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:13.733908   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.733922   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.733929   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.733937   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.733945   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.733952   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.733958   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.733964   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.733969   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.733974   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.733985   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.733995   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.734001   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.734013   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.734018   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.734021   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.734024   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.734027   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.734033   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.734044   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.734052   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.734057   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.734065   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.734069   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.734079   95759 system_pods.go:126] duration metric: took 98.726691ms to wait for k8s-apps to be running ...
	I0919 22:39:13.734091   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:13.734175   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:13.747528   95759 system_svc.go:56] duration metric: took 13.410723ms WaitForService to wait for kubelet
	I0919 22:39:13.747570   95759 kubeadm.go:578] duration metric: took 277.970313ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:13.747595   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:13.751576   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751598   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751610   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751613   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751616   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751619   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751622   95759 node_conditions.go:105] duration metric: took 4.023347ms to run NodePressure ...
	I0919 22:39:13.751634   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:13.751651   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:13.753417   95759 out.go:203] 
	I0919 22:39:13.755135   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.755254   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.757081   95759 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:39:13.758394   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:13.759816   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:13.761015   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:13.761039   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:13.761051   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:13.761261   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:13.761304   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:13.761429   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.782360   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:13.782385   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:13.782406   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:13.782436   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:13.782501   95759 start.go:364] duration metric: took 44.732µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:39:13.782524   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:13.782534   95759 fix.go:54] fixHost starting: m04
	I0919 22:39:13.782740   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:13.801027   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:39:13.801060   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:13.802864   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:39:13.802931   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:39:14.055762   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:14.074848   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:39:14.075262   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:39:14.094352   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:14.094594   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:14.094647   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:39:14.114064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:14.114317   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:39:14.114330   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:14.114961   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50476->127.0.0.1:32828: read: connection reset by peer
	I0919 22:39:17.116460   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.118409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.120443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.120776   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.121743   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.123258   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.125391   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.125915   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.126437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.127525   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.128400   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.130402   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:53.132094   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:56.132448   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:59.133362   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:02.134004   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:05.136365   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:08.136767   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:11.137236   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:14.138295   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:17.139769   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:20.141642   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:23.143546   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:26.143966   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:29.144829   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:32.146423   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:35.148801   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:38.150005   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:41.150409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:44.150842   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:47.152406   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:50.154676   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:53.156471   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:56.157387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:59.158366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:02.160382   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:05.162387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:08.162900   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:11.163385   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:14.164700   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:17.165484   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:20.167366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:23.169809   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:26.170437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:29.171409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:32.173443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:35.175650   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:38.176984   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:41.177465   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:44.179757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:47.181386   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:50.183757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:53.185945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:56.186445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:59.187353   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:02.189451   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:05.191306   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:08.191935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:11.192418   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:14.194206   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:14.194236   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:42:14.194304   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.214461   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.214567   95759 machine.go:96] duration metric: took 3m0.119960942s to provisionDockerMachine
	I0919 22:42:14.214652   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:14.214684   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.238129   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.238280   95759 retry.go:31] will retry after 248.39527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.487752   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.507066   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.507179   95759 retry.go:31] will retry after 241.490952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.749696   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.769271   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.769394   95759 retry.go:31] will retry after 573.29064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.342939   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.361305   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.361440   95759 retry.go:31] will retry after 493.546865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.855177   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.876393   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:15.876503   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:15.876520   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.876565   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:42:15.876594   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.896632   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.896744   95759 retry.go:31] will retry after 211.367435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.109288   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.130175   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.130270   95759 retry.go:31] will retry after 289.868834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.420891   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.442472   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.442604   95759 retry.go:31] will retry after 547.590918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.990359   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:17.008923   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:17.009049   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009064   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009073   95759 fix.go:56] duration metric: took 3m3.226540631s for fixHost
	I0919 22:42:17.009081   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m3.226570319s
	W0919 22:42:17.009092   95759 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009191   95759 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009203   95759 start.go:729] Will try again in 5 seconds ...
	I0919 22:42:22.010253   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:42:22.010363   95759 start.go:364] duration metric: took 70.627µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:42:22.010395   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:42:22.010406   95759 fix.go:54] fixHost starting: m04
	I0919 22:42:22.010649   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.029262   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:42:22.029294   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:42:22.031096   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:42:22.031220   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:42:22.294621   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.313475   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:42:22.313799   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:42:22.333284   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:42:22.333514   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:42:22.333568   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:42:22.353907   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:42:22.354187   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:42:22.354204   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:42:22.354888   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32833: read: connection reset by peer
	I0919 22:42:25.355457   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.356034   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.356407   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.358370   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.359693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.360614   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.362397   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.363784   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.364408   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.366596   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.367888   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.369219   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:01.370395   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:04.371156   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:07.372724   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:10.373695   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:13.374908   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:16.375383   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:19.376388   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:22.378537   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:25.379508   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:28.380693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:31.381372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:34.383699   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:37.384935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:40.385685   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:43.388048   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:46.388445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:49.389657   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:52.391627   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:55.392687   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:58.393125   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:01.393619   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:04.395945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:07.398372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:10.398608   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:13.400912   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:16.401401   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:19.402479   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:22.404415   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:25.405562   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:28.406498   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:31.407755   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:34.410076   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:37.412454   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:40.413768   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:43.415168   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:46.416416   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:49.417399   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:52.419643   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:55.420363   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:58.420738   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:01.421609   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:04.423913   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:07.425430   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:10.426778   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:13.428381   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:16.429193   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:19.430490   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:22.432491   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:45:22.432543   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:45:22.432609   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.452712   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.452777   95759 machine.go:96] duration metric: took 3m0.119250879s to provisionDockerMachine
	I0919 22:45:22.452858   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:22.452892   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.472911   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.473047   95759 retry.go:31] will retry after 202.283506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:22.676548   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.694834   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.694965   95759 retry.go:31] will retry after 463.907197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.159340   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.178560   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.178658   95759 retry.go:31] will retry after 365.232594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.544210   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.564214   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:23.564366   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:23.564390   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.564449   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:45:23.564494   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.583703   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.583796   95759 retry.go:31] will retry after 343.872214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.928329   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.946762   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.946864   95759 retry.go:31] will retry after 341.564773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.289296   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.312255   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:24.312369   95759 retry.go:31] will retry after 341.728488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.655044   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.674698   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:24.674839   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.674858   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.674871   95759 fix.go:56] duration metric: took 3m2.664466794s for fixHost
	I0919 22:45:24.674881   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m2.664502957s
	W0919 22:45:24.674982   95759 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.677468   95759 out.go:203] 
	W0919 22:45:24.678601   95759 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.678620   95759 out.go:285] * 
	W0919 22:45:24.680349   95759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:45:24.681822   95759 out.go:203] 
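	
	Note: the two ~3-minute stalls in the log above are libmachine repeatedly dialing the host port Docker published for the m04 node's sshd (127.0.0.1:32828, then 127.0.0.1:32833) and getting "connection refused" until provisionDockerMachine gives up. As a rough illustration only (a minimal sketch, not minikube's or libmachine's actual code; the address and timeouts are taken from this log), the dial-with-retry shape looks like:
	
	package main
	
	import (
		"fmt"
		"net"
		"time"
	)
	
	// dialWithRetry keeps attempting a TCP dial to the forwarded SSH port until it
	// succeeds or the overall deadline expires, mirroring the pattern in the log above.
	func dialWithRetry(addr string, timeout time.Duration) (net.Conn, error) {
		deadline := time.Now().Add(timeout)
		for {
			conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
			if err == nil {
				return conn, nil
			}
			if time.Now().After(deadline) {
				return nil, fmt.Errorf("giving up on %s: %w", addr, err)
			}
			fmt.Println("Error dialing TCP:", err) // corresponds to the repeated libmachine lines
			time.Sleep(3 * time.Second)            // the log shows roughly 3s between attempts
		}
	}
	
	func main() {
		// 127.0.0.1:32828 is the host port published for the node's sshd in the run above.
		if _, err := dialWithRetry("127.0.0.1:32828", 3*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
	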
	
	
	==> CRI-O <==
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.490551103Z" level=info msg="Starting container: b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74" id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.498969531Z" level=info msg="Started container" PID=1368 containerID=b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74 description=kube-system/coredns-66bc5c9577-ltjmz/coredns id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer sandboxID=815752732ad74ae8e5961e3c79b9a821b4903503b20978d661c98a6a36ef4b9d
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.902522587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.906977791Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907009772Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907037293Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911428136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911466965Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911486751Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915460017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915497091Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915525773Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919544523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919575130Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.012886161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013169907Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013901636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.014168511Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018353225Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018511963Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036610475Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/passwd: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036659847Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/group: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.095888981Z" level=info msg="Created container f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.096561974Z" level=info msg="Starting container: f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f" id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.104038077Z" level=info msg="Started container" PID=1741 containerID=f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f description=kube-system/storage-provisioner/storage-provisioner id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c833b8c10762b8d7272f8c569836ab444d6d5b309d15da090c6b1664db70ed7c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f73602ecef49b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       3                   c833b8c10762b       storage-provisioner
	b2cb38a999cac       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   815752732ad74       coredns-66bc5c9577-ltjmz
	676fc8265fa71       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   6 minutes ago       Running             busybox                   1                   853e9db2bdfa8       busybox-7b57f96db7-rnjl7
	7e1e5941c1568       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   6 minutes ago       Running             kindnet-cni               1                   547d271717250       kindnet-rd882
	c9027fdf07d43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Exited              storage-provisioner       2                   c833b8c10762b       storage-provisioner
	a22f43664887c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   6 minutes ago       Running             kube-proxy                1                   d51eb4228f1eb       kube-proxy-hdxxn
	377f1c9e1defe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   6 minutes ago       Running             coredns                   1                   e756edadac294       coredns-66bc5c9577-5gnbx
	55f2dff5151a8       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   6 minutes ago       Running             kube-apiserver            1                   0d488246e5b37       kube-apiserver-ha-984158
	79c74b643f5a5       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   6 minutes ago       Running             kube-scheduler            1                   8f2d6202aa772       kube-scheduler-ha-984158
	32b11c5432de7       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   6 minutes ago       Running             kube-vip                  0                   01eeb16fe8f46       kube-vip-ha-984158
	935ae0c237d97       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   6 minutes ago       Running             kube-controller-manager   1                   8871adc8c9755       kube-controller-manager-ha-984158
	13b67e56860f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 minutes ago       Running             etcd                      1                   0fb5a565c96e5       etcd-ha-984158
	
	
	==> coredns [377f1c9e1defee6bb59c215f0a1a03ae29aa5b77855a39725abe9d88f4182f71] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47318 - 34366 "HINFO IN 8418387040146284568.7180250627065820856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092087824s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47142 - 36068 "HINFO IN 3054302858159562754.8459958995054926466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023807531s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce0d9390578a44a698c3fda69fb20273
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m53s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                 From             Message
	  ----    ------                   ----                ----             -------
	  Normal  Starting                 11m                 kube-proxy       
	  Normal  Starting                 6m51s               kube-proxy       
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)   kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                 kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)   kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)   kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                 kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                 kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                 kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                 node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                11m                 kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           11m                 node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           10m                 node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           8m57s               node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  Starting                 7m                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m59s (x8 over 7m)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m59s (x8 over 7m)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m59s (x8 over 7m)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m50s               node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           6m50s               node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           6m14s               node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b431cbd7af4c3f980669ae3ee3bdc5
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m15s                  kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  NodeHasNoDiskPressure    9m2s (x8 over 9m2s)    kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m2s (x8 over 9m2s)    kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m2s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m2s (x8 over 9m2s)    kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           8m57s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 6m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m58s (x8 over 6m58s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m58s (x8 over 6m58s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m58s (x8 over 6m58s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m50s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           6m50s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	Name:               ha-984158-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_52_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:51 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m03
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:45:05 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:45:05 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:45:05 +0000   Fri, 19 Sep 2025 22:34:51 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:45:05 +0000   Fri, 19 Sep 2025 22:35:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.4
	  Hostname:    ha-984158-m03
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 7c08bd0fbe0a42fe8365a6eeff6e89e7
	  System UUID:                a60f86ef-6d86-4217-85ca-ad02416ddc34
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-c7qf4                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-984158-m03                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         10m
	  kube-system                 kindnet-269nt                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      10m
	  kube-system                 kube-apiserver-ha-984158-m03             250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-ha-984158-m03    200m (2%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-k2drm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-scheduler-ha-984158-m03             100m (1%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-vip-ha-984158-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 10m                    kube-proxy       
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode           8m57s                  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode           6m50s                  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  RegisteredNode           6m50s                  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s (x8 over 6m20s)  kubelet          Node ha-984158-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s (x8 over 6m20s)  kubelet          Node ha-984158-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s (x8 over 6m20s)  kubelet          Node ha-984158-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-984158-m03 event: Registered Node ha-984158-m03 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87] <==
	{"level":"warn","ts":"2025-09-19T22:39:05.374803Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.475194Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.478732Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.574408Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.674297Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.696181Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.702525Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.703978Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.718893Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.725945Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.729555Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.748042Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.774836Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.822315Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.874213Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-09-19T22:39:05.926024Z","caller":"rafthttp/peer.go:254","msg":"dropped internal Raft message since sending buffer is full","message-type":"MsgHeartbeat","local-member-id":"aec36adc501070cc","from":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"info","ts":"2025-09-19T22:39:07.225632Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream Message"}
	{"level":"info","ts":"2025-09-19T22:39:07.225685Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.225724Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.226913Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:39:07.226991Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.240674Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.244098Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597341Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	
	
	==> kernel <==
	 22:45:26 up  1:27,  0 users,  load average: 0.21, 0.55, 0.57
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7e1e5941c1568be6947d5879f8b05807535d937790e13f1de20f69c7cb7f0ccd] <==
	I0919 22:44:44.909474       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:44:54.902163       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:44:54.902196       1 main.go:301] handling current node
	I0919 22:44:54.902212       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:44:54.902217       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:44:54.902415       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:44:54.902428       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:04.902949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:04.902981       1 main.go:301] handling current node
	I0919 22:45:04.902997       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:04.903003       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:04.903212       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:04.903225       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:14.910562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:14.910592       1 main.go:301] handling current node
	I0919 22:45:14.910608       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:14.910612       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:14.910787       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:14.910796       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:24.910192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:24.910232       1 main.go:301] handling current node
	I0919 22:45:24.910253       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:24.910259       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:24.910469       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:24.910478       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645] <==
	I0919 22:38:33.237483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 22:38:33.237492       1 cache.go:39] Caches are synced for autoregister controller
	I0919 22:38:33.244473       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0919 22:38:33.256040       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:38:33.256074       1 policy_source.go:240] refreshing policies
	I0919 22:38:33.258725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:38:33.330813       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 22:38:33.340553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 22:38:33.343923       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 22:38:34.057940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:38:34.123968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:38:34.654257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0919 22:38:36.563731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:38:37.013446       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:39:07.528152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:39:58.806991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:59.831450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:12.701181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:22.300169       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:28.420805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:42.481948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:43.538989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:45.026909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:54.365379       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:11.122450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba] <==
	I0919 22:38:36.559743       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 22:38:36.559750       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 22:38:36.559764       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 22:38:36.559819       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 22:38:36.560391       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 22:38:36.560524       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:38:36.561755       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:38:36.563075       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 22:38:36.564243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:38:36.565318       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:38:36.567600       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:38:36.567791       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:38:36.567913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:38:36.568459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:38:36.568957       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:38:36.577191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:38:36.580467       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:38:36.580630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:38:36.580760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:38:36.580809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:38:36.580815       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	I0919 22:38:36.580872       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:38:36.590818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:39:15.982637       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6rhpz\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:39:15.983309       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4dd58d83-a50d-4db8-9919-ac6b8b041c9e", APIVersion:"v1", ResourceVersion:"312", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6rhpz": the object has been modified; please apply your changes to the latest version and try again
	
	
	==> kube-proxy [a22f43664887c7fcbb5c6716c9592a2cd654e455fd905f9edd287a2f6c9aba58] <==
	I0919 22:38:34.512575       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:38:34.579894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:38:34.680953       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:38:34.680992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:38:34.681200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:38:34.704454       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:38:34.704534       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:38:34.710440       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:38:34.710834       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:38:34.710880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:34.712458       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:38:34.712504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:38:34.712543       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:38:34.712552       1 config.go:309] "Starting node config controller"
	I0919 22:38:34.712564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:38:34.712555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:38:34.712587       1 config.go:200] "Starting service config controller"
	I0919 22:38:34.712613       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:38:34.812688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:38:34.812708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:38:34.812734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:38:34.812768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9] <==
	I0919 22:38:28.535240       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:38:33.134307       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:38:33.134372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:38:33.134385       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:38:33.134394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:38:33.174419       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:38:33.174609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:33.180536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.180680       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.184947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:38:33.185091       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:38:33.284411       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:43:16 ha-984158 kubelet[720]: E0919 22:43:16.951587     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321796951342086  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:26 ha-984158 kubelet[720]: E0919 22:43:26.952677     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321806952453683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:26 ha-984158 kubelet[720]: E0919 22:43:26.952714     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321806952453683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:36 ha-984158 kubelet[720]: E0919 22:43:36.954198     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321816953914416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:36 ha-984158 kubelet[720]: E0919 22:43:36.954234     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321816953914416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955354     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955393     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956480     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956517     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958459     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958501     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.959975     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.960016     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961918     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961955     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964584     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964626     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966592     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966634     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968415     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968455     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969597     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969639     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971464     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971505     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartClusterKeepsNodes (476.91s)
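Note: the kubelet excerpt above is dominated by the repeating eviction_manager "failed to get HasDedicatedImageFs: missing image stats" errors for the CRI-O image filesystem at /var/lib/containers/storage/overlay-images. A quick diagnostic for what image-filesystem stats the runtime actually reports (a hypothetical check, not executed in this run, assuming crictl is present inside the node as it normally is on the kicbase image) would be:

	out/minikube-linux-amd64 -p ha-984158 ssh "sudo crictl imagefsinfo"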

                                                
                                    
x
+
TestMultiControlPlane/serial/DeleteSecondaryNode (13.25s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 node delete m03 --alsologtostderr -v 5: (10.601858468s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (533.968ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:45:37.695766  105829 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:45:37.696036  105829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:45:37.696047  105829 out.go:374] Setting ErrFile to fd 2...
	I0919 22:45:37.696053  105829 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:45:37.696279  105829 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:45:37.696490  105829 out.go:368] Setting JSON to false
	I0919 22:45:37.696514  105829 mustload.go:65] Loading cluster: ha-984158
	I0919 22:45:37.696647  105829 notify.go:220] Checking for updates...
	I0919 22:45:37.696943  105829 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:45:37.696967  105829 status.go:174] checking status of ha-984158 ...
	I0919 22:45:37.697535  105829 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:45:37.722484  105829 status.go:371] ha-984158 host status = "Running" (err=<nil>)
	I0919 22:45:37.722532  105829 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:45:37.722863  105829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:45:37.742166  105829 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:45:37.742439  105829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:37.742475  105829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:45:37.762927  105829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:45:37.857608  105829 ssh_runner.go:195] Run: systemctl --version
	I0919 22:45:37.862062  105829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:45:37.874496  105829 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:45:37.934617  105829 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 22:45:37.924453946 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:45:37.935257  105829 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:45:37.935294  105829 api_server.go:166] Checking apiserver status ...
	I0919 22:45:37.935334  105829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:45:37.947091  105829 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/954/cgroup
	W0919 22:45:37.957546  105829 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/954/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:45:37.957604  105829 ssh_runner.go:195] Run: ls
	I0919 22:45:37.961420  105829 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:45:37.966419  105829 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:45:37.966443  105829 status.go:463] ha-984158 apiserver status = Running (err=<nil>)
	I0919 22:45:37.966454  105829 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:45:37.966468  105829 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:45:37.966700  105829 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:45:37.984871  105829 status.go:371] ha-984158-m02 host status = "Running" (err=<nil>)
	I0919 22:45:37.984894  105829 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:45:37.985161  105829 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:45:38.005017  105829 host.go:66] Checking if "ha-984158-m02" exists ...
	I0919 22:45:38.005386  105829 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:38.005437  105829 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:45:38.023790  105829 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:45:38.117572  105829 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:45:38.130450  105829 kubeconfig.go:125] found "ha-984158" server: "https://192.168.49.254:8443"
	I0919 22:45:38.130480  105829 api_server.go:166] Checking apiserver status ...
	I0919 22:45:38.130522  105829 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:45:38.142538  105829 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/427/cgroup
	W0919 22:45:38.152536  105829 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/427/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:45:38.152595  105829 ssh_runner.go:195] Run: ls
	I0919 22:45:38.156191  105829 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0919 22:45:38.160357  105829 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0919 22:45:38.160379  105829 status.go:463] ha-984158-m02 apiserver status = Running (err=<nil>)
	I0919 22:45:38.160387  105829 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:45:38.160400  105829 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:45:38.160653  105829 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:45:38.180566  105829 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:45:38.180587  105829 status.go:384] host is not running, skipping remaining checks
	I0919 22:45:38.180593  105829 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:497: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5" : exit status 7
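Note: the status output above reports ha-984158-m04 with host and kubelet stopped, which is presumably what produces the non-zero exit from the status command. A plausible manual recovery sketch (not part of this run; it assumes the ha-984158 profile still exists) mirrors the node start invocation recorded in the audit log below:

	out/minikube-linux-amd64 -p ha-984158 node start m04 --alsologtostderr -v 5
	out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5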
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:38:20.505682313Z",
	            "FinishedAt": "2025-09-19T22:38:19.832335475Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e4fdcd3468198deb98d4a8f23cbd640a198a460cfea4c64e865edb3f33eaab9",
	            "SandboxKey": "/var/run/docker/netns/8e4fdcd34681",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:4b:fa:16:2f:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "b56ee79fb4c604077e565626768d3a9928d875fe4a72dd45dd22369025cf8f31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
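Note: the docker inspect output above contains the host-port mappings (22/tcp -> 32813, 8443/tcp -> 32816) that the SSH and status checks in the logs below depend on. The same lookup can be reproduced by hand with the inspect format string already used by the test harness (a sketch only, not executed here):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' ha-984158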
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.199794577s)
helpers_test.go:260: TestMultiControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node start m02 --alsologtostderr -v 5                                                                                      │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ stop    │ ha-984158 stop --alsologtostderr -v 5                                                                                                │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:38 UTC │
	│ start   │ ha-984158 start --wait true --alsologtostderr -v 5                                                                                   │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │                     │
	│ node    │ ha-984158 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │ 19 Sep 25 22:45 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:38:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:38:20.249865   95759 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:20.249988   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.249994   95759 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:20.250000   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.250249   95759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:38:20.250707   95759 out.go:368] Setting JSON to false
	I0919 22:38:20.251700   95759 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4850,"bootTime":1758316650,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:38:20.251800   95759 start.go:140] virtualization: kvm guest
	I0919 22:38:20.254109   95759 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:38:20.255764   95759 notify.go:220] Checking for updates...
	I0919 22:38:20.255845   95759 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:38:20.257481   95759 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:38:20.259062   95759 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:20.260518   95759 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:38:20.262187   95759 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:38:20.263765   95759 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:38:20.265783   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:20.265907   95759 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:38:20.294398   95759 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:38:20.294613   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.361388   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.349869718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.361497   95759 docker.go:318] overlay module found
	I0919 22:38:20.363722   95759 out.go:179] * Using the docker driver based on existing profile
	I0919 22:38:20.365305   95759 start.go:304] selected driver: docker
	I0919 22:38:20.365327   95759 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disab
leMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.365467   95759 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:38:20.365552   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.420337   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.409819419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.420989   95759 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:38:20.421017   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:20.421096   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:20.421172   95759 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubef
low:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.423543   95759 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:38:20.425622   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:20.427928   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:20.429486   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:20.429552   95759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:38:20.429561   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:20.429624   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:20.429683   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:20.429696   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:20.429903   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.451753   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:20.451777   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:20.451800   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:20.451830   95759 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:20.451903   95759 start.go:364] duration metric: took 52.261µs to acquireMachinesLock for "ha-984158"
	I0919 22:38:20.451929   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:20.451935   95759 fix.go:54] fixHost starting: 
	I0919 22:38:20.452267   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.470646   95759 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:38:20.470675   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:20.473543   95759 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:38:20.473635   95759 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:38:20.725924   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.747322   95759 kic.go:430] container "ha-984158" state is running.
	I0919 22:38:20.748445   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:20.768582   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.768847   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:20.768938   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:20.788669   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:20.788894   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:20.788907   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:20.789621   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46262->127.0.0.1:32813: read: connection reset by peer
	I0919 22:38:23.928529   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:23.928563   95759 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:38:23.928620   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:23.947237   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:23.947447   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:23.947461   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:38:24.095390   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:24.095477   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.113617   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.113853   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.113878   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:24.249977   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:24.250008   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:24.250048   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:24.250058   95759 provision.go:84] configureAuth start
	I0919 22:38:24.250116   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:24.268530   95759 provision.go:143] copyHostCerts
	I0919 22:38:24.268578   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268614   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:24.268624   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268699   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:24.268797   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268816   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:24.268820   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268848   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:24.268908   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268928   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:24.268932   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268959   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:24.269015   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:38:24.530322   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:24.530388   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:24.530429   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.549937   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:24.649314   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:24.649386   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:24.674567   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:24.674639   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:38:24.700190   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:24.700255   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:24.725998   95759 provision.go:87] duration metric: took 475.930644ms to configureAuth
	I0919 22:38:24.726025   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:24.726265   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:24.726378   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.744668   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.744868   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.744887   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:25.041744   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:25.041773   95759 machine.go:96] duration metric: took 4.2729084s to provisionDockerMachine
	I0919 22:38:25.041790   95759 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:38:25.041804   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:25.041885   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:25.041937   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.061613   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.158944   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:25.162445   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:25.162473   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:25.162481   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:25.162487   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:25.162497   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:25.162543   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:25.162612   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:25.162622   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:25.162697   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:25.171420   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:25.196548   95759 start.go:296] duration metric: took 154.74522ms for postStartSetup
	I0919 22:38:25.196622   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:25.196658   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.214818   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.307266   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:25.311757   95759 fix.go:56] duration metric: took 4.859817354s for fixHost
	I0919 22:38:25.311786   95759 start.go:83] releasing machines lock for "ha-984158", held for 4.859867111s
	I0919 22:38:25.311855   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:25.331292   95759 ssh_runner.go:195] Run: cat /version.json
	I0919 22:38:25.331342   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.331445   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:25.331519   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.350964   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.351259   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.521285   95759 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:25.525969   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:25.668131   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:25.673196   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.683302   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:25.683463   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.693199   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:25.693229   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:25.693261   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:25.693301   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:25.705935   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:25.717521   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:25.717575   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:25.730590   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:25.742679   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:25.806884   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:25.876321   95759 docker.go:234] disabling docker service ...
	I0919 22:38:25.876399   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:25.889742   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:25.902299   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:25.968552   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:26.035171   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:26.047090   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:26.063771   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:26.063823   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.074242   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:26.074296   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.085364   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.096159   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.106569   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:26.116384   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.127163   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.138533   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.149140   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:26.157845   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
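Taken together, the sed edits above would leave minikube's CRI-O drop-in at /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings. This is a sketch reconstructed from the commands themselves, since the log never prints the file, and any surrounding TOML sections are omitted:

    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The systemctl daemon-reload and restart crio commands that follow are what make these settings take effect.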
	I0919 22:38:26.166573   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.230447   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:38:26.333573   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:38:26.333644   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:38:26.337977   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:38:26.338040   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:38:26.341911   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:38:26.375206   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:38:26.375273   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.410086   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.448363   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:38:26.449629   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:38:26.467494   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:38:26.471488   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.484310   95759 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:fal
se DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:38:26.484505   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:26.484557   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.531218   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.531242   95759 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:38:26.531296   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.567181   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.567205   95759 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:38:26.567217   95759 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:38:26.567354   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:38:26.567443   95759 ssh_runner.go:195] Run: crio config
	I0919 22:38:26.612533   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:26.612558   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:26.612573   95759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:38:26.612596   95759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:38:26.612731   95759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
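This generated kubeadm config is later copied to the node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). As an illustrative sanity check that is not part of the logged run, a config like this could be validated by hand with the kubeadm binary minikube already staged under /var/lib/minikube/binaries, assuming the `kubeadm config validate` subcommand available in recent releases:

    sudo /var/lib/minikube/binaries/v1.34.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new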
	
	I0919 22:38:26.612751   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:38:26.612791   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:38:26.625916   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:26.626026   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
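The kube-vip static pod above binds the control-plane VIP 192.168.49.254 (the APIServerHAVIP from the cluster config) on eth0 and relies on leader election, since the ip_vs kernel modules were reported missing a few lines earlier. As a purely illustrative check that the logged run does not perform, whether this node currently holds the VIP could be verified with:

    ip addr show eth0 | grep 192.168.49.254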
	I0919 22:38:26.626083   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:38:26.636322   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:38:26.636382   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:38:26.645958   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:38:26.665184   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:38:26.684627   95759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:38:26.703734   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:38:26.722194   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:38:26.726033   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.737748   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.802332   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:38:26.828015   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:38:26.828140   95759 certs.go:194] generating shared ca certs ...
	I0919 22:38:26.828156   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:26.828370   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:38:26.828426   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:38:26.828439   95759 certs.go:256] generating profile certs ...
	I0919 22:38:26.828533   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:38:26.828559   95759 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24
	I0919 22:38:26.828573   95759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:38:27.179556   95759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 ...
	I0919 22:38:27.179596   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24: {Name:mk0ca61656ed051ffa5dbf8b847da7c47b965f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179810   95759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 ...
	I0919 22:38:27.179828   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24: {Name:mk16b6aae6417eca80799eff0a4c27dc0860bcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179937   95759 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:38:27.180098   95759 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
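The regenerated apiserver serving certificate carries IP SANs covering the in-cluster service address, loopback, all three control-plane node IPs and the HA VIP 192.168.49.254 (the IP list in the "Generating cert" line above). As an illustrative check, not part of the run, the SANs could be confirmed on the node once the cert has been copied to /var/lib/minikube/certs:

    sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt | grep -A1 'Subject Alternative Name'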
	I0919 22:38:27.180260   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:38:27.180276   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:38:27.180289   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:38:27.180307   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:38:27.180321   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:38:27.180334   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:38:27.180354   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:38:27.180364   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:38:27.180373   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:38:27.180419   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:38:27.180445   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:38:27.180454   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:38:27.180474   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:38:27.180497   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:38:27.180517   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:38:27.180557   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:27.180607   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.180624   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.180637   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.181195   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:38:27.209358   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:38:27.235624   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:38:27.260629   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:38:27.286335   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:38:27.312745   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:38:27.340226   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:38:27.366125   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:38:27.395452   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:38:27.424801   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:38:27.463750   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:38:27.502091   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:38:27.530600   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:38:27.538166   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:38:27.552357   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559014   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559181   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.569405   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:38:27.582829   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:38:27.597217   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602410   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602472   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.610784   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:38:27.624272   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:38:27.635899   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640089   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640162   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.647669   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
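The hash-named symlinks created above follow OpenSSL's c_rehash convention: the link name is the certificate's subject-name hash plus a .0 suffix, which is exactly what the preceding `openssl x509 -hash -noout` calls compute. For example, with the values seen in this log:

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, hence the link /etc/ssl/certs/b5213941.0 -> minikubeCA.pem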
	I0919 22:38:27.657702   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:38:27.661673   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:38:27.669449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:38:27.676756   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:38:27.683701   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:38:27.690945   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:38:27.698327   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
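Each of the `-checkend 86400` calls above asks OpenSSL whether the certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means it will, so minikube reuses the existing control-plane certs instead of regenerating them. An equivalent standalone check, shown only for illustration, would be:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
      && echo 'not expiring within 24h' || echo 'expires within 24h'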
	I0919 22:38:27.705328   95759 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:27.705437   95759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:38:27.705491   95759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:38:27.743232   95759 cri.go:89] found id: "55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645"
	I0919 22:38:27.743258   95759 cri.go:89] found id: "79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9"
	I0919 22:38:27.743263   95759 cri.go:89] found id: "32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3"
	I0919 22:38:27.743269   95759 cri.go:89] found id: "935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba"
	I0919 22:38:27.743273   95759 cri.go:89] found id: "13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87"
	I0919 22:38:27.743277   95759 cri.go:89] found id: ""
	I0919 22:38:27.743327   95759 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:38:27.766931   95759 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","pid":859,"status":"running","bundle":"/run/containers/storage/overlay-containers/13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87/userdata","rootfs":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","created":"2025-09-19T22:38:27.457722678Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"n
ame\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.401544575Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223d
c4628f1e45acc16ccdb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/ee72b99d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kub
ernetes.io/config.seen":"2025-09-19T22:38:26.901880352Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","pid":878,"status":"running","bundle":"/run/containers/storage/overlay-containers/32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3/userdata","rootfs":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","created":"2025-09-19T22:38:27.465010624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMe
ssagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.416092699Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb
866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"cont
ainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/a6d77d36\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:38:26.901891443Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd
.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","pid":954,"status":"running","bundle":"/run/containers/storage/overlay-containers/55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645/userdata","rootfs":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","created":"2025-09-19T22:38:27.515032823Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.
hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.443516596Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-98415
8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kuberne
tes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/d0001fc3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"hos
t_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:38:26.901886915Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79c74b643f5a5959b25d582e997875f3399705b
3da970e161badc0d1521410a9","pid":921,"status":"running","bundle":"/run/containers/storage/overlay-containers/79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9/userdata","rootfs":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","created":"2025-09-19T22:38:27.502254065Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.ku
bernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.438041518Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-
scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/6dc9da94\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:38:26.901890185Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDepen
dencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","pid":903,"status":"running","bundle":"/run/containers/storage/overlay-containers/935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba/userdata","rootfs":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","created":"2025-09-19T22:38:27.483620953Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7e
aa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.414415487Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controlle
r-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6b05a580a113699
67b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/e63161fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonl
y\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a11369967b27d393af16",
"kubernetes.io/config.seen":"2025-09-19T22:38:26.901888813Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:38:27.767290   95759 cri.go:126] list returned 5 containers
	I0919 22:38:27.767310   95759 cri.go:129] container: {ID:13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 Status:running}
	I0919 22:38:27.767328   95759 cri.go:135] skipping {13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 running}: state = "running", want "paused"
	I0919 22:38:27.767344   95759 cri.go:129] container: {ID:32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 Status:running}
	I0919 22:38:27.767353   95759 cri.go:135] skipping {32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 running}: state = "running", want "paused"
	I0919 22:38:27.767369   95759 cri.go:129] container: {ID:55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 Status:running}
	I0919 22:38:27.767378   95759 cri.go:135] skipping {55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 running}: state = "running", want "paused"
	I0919 22:38:27.767384   95759 cri.go:129] container: {ID:79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 Status:running}
	I0919 22:38:27.767393   95759 cri.go:135] skipping {79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 running}: state = "running", want "paused"
	I0919 22:38:27.767399   95759 cri.go:129] container: {ID:935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba Status:running}
	I0919 22:38:27.767405   95759 cri.go:135] skipping {935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba running}: state = "running", want "paused"
	I0919 22:38:27.767454   95759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:38:27.777467   95759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:38:27.777485   95759 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:38:27.777529   95759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:38:27.786748   95759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:27.787254   95759 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.787385   95759 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:38:27.787739   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.788395   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:38:27.788915   95759 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:38:27.788933   95759 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:38:27.788940   95759 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:38:27.788945   95759 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:38:27.788950   95759 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:38:27.788983   95759 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:38:27.789419   95759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:38:27.799384   95759 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:38:27.799408   95759 kubeadm.go:593] duration metric: took 21.916898ms to restartPrimaryControlPlane
	I0919 22:38:27.799419   95759 kubeadm.go:394] duration metric: took 94.114072ms to StartCluster
	I0919 22:38:27.799438   95759 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.799508   95759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.800283   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.800531   95759 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:38:27.800560   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:38:27.800569   95759 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:38:27.800796   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.803656   95759 out.go:179] * Enabled addons: 
	I0919 22:38:27.804977   95759 addons.go:514] duration metric: took 4.403593ms for enable addons: enabled=[]
	I0919 22:38:27.805014   95759 start.go:246] waiting for cluster config update ...
	I0919 22:38:27.805026   95759 start.go:255] writing updated cluster config ...
	I0919 22:38:27.806661   95759 out.go:203] 
	I0919 22:38:27.808147   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.808240   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.809900   95759 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:38:27.811058   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:27.812367   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:27.813643   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:27.813670   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:27.813747   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:27.813763   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:27.813745   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:27.813880   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.838519   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:27.838542   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:27.838565   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:27.838595   95759 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:27.838659   95759 start.go:364] duration metric: took 44.758µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:38:27.838683   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:27.838692   95759 fix.go:54] fixHost starting: m02
	I0919 22:38:27.838992   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:27.861121   95759 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:38:27.861152   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:27.863184   95759 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:38:27.863257   95759 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:38:28.125822   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:28.146346   95759 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:38:28.146733   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:28.168173   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:28.168475   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:28.168559   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:28.189073   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:28.189415   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:28.189432   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:28.190241   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45924->127.0.0.1:32818: read: connection reset by peer
	I0919 22:38:31.326317   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.326343   95759 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:38:31.326396   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.346064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.346303   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.346317   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:38:31.495830   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.495906   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.515009   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.515247   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.515266   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:31.654008   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:31.654036   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:31.654057   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:31.654067   95759 provision.go:84] configureAuth start
	I0919 22:38:31.654148   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:31.672869   95759 provision.go:143] copyHostCerts
	I0919 22:38:31.672912   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.672970   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:31.672984   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.673073   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:31.673199   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673230   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:31.673241   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673286   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:31.673375   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673403   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:31.673410   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673450   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:31.673525   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:38:31.832848   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:31.832920   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:31.832966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.850721   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:31.949325   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:31.949391   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:31.976597   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:31.976650   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:38:32.002584   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:32.002653   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:32.035331   95759 provision.go:87] duration metric: took 381.249624ms to configureAuth
	I0919 22:38:32.035366   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:32.035610   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:32.035718   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.058439   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:32.058702   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:32.058739   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:32.484521   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:32.484550   95759 machine.go:96] duration metric: took 4.316059426s to provisionDockerMachine
	I0919 22:38:32.484563   95759 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:38:32.484576   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:32.484635   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:32.484697   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.510926   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.619996   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:32.629566   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:32.629676   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:32.629727   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:32.629764   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:32.629806   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:32.629922   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:32.630086   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:32.630147   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:32.630353   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:32.645004   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:32.675202   95759 start.go:296] duration metric: took 190.622889ms for postStartSetup
	I0919 22:38:32.675288   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:32.675327   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.697580   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.795763   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:32.801249   95759 fix.go:56] duration metric: took 4.962547133s for fixHost
	I0919 22:38:32.801275   95759 start.go:83] releasing machines lock for "ha-984158-m02", held for 4.962602853s
	I0919 22:38:32.801364   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:32.827878   95759 out.go:179] * Found network options:
	I0919 22:38:32.829587   95759 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:38:32.830969   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:38:32.831030   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:38:32.831146   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:32.831196   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.831204   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:32.831253   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.853448   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.853718   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:33.150612   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:33.160301   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.176730   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:33.176815   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.191328   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:33.191364   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:33.191416   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:33.191485   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:33.213815   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:33.231542   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:33.231635   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:33.247095   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:33.260329   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:33.380840   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:33.498308   95759 docker.go:234] disabling docker service ...
	I0919 22:38:33.498382   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:33.517853   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:33.536133   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:33.652463   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:33.761899   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:33.774677   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:33.793915   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:33.793969   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.804996   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:33.805057   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.816056   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.827802   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.840124   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:33.850301   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.861287   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.871826   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
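	The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses the registry.k8s.io/pause:3.10.1 pause image, the systemd cgroup manager, a "pod" conmon cgroup, and the net.ipv4.ip_unprivileged_port_start=0 default sysctl. A hypothetical spot check of the touched keys (not part of the test run; the path and key names are taken from the sed expressions above):
	
		# Print the CRI-O drop-in settings the sed edits above should have written.
		sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
			/etc/crio/crio.conf.d/02-crio.conf
	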
	I0919 22:38:33.883496   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:33.893950   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:33.906440   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:34.043971   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:04.326209   95759 ssh_runner.go:235] Completed: sudo systemctl restart crio: (30.282202499s)
	I0919 22:39:04.326243   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:04.326297   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:04.330226   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:04.330288   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:04.334075   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:04.369702   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:04.369800   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.406718   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.445793   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:04.446931   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:04.448076   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:04.466313   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:04.470940   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:04.487515   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:04.487734   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:04.487986   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:04.509829   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:04.510158   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:39:04.510174   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:04.510188   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:04.510345   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:04.510395   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:04.510409   95759 certs.go:256] generating profile certs ...
	I0919 22:39:04.510508   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:04.510584   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.84abfbbb
	I0919 22:39:04.510636   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:04.510651   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:04.510678   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:04.510696   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:04.510717   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:04.510733   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:04.510752   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:04.510781   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:04.510806   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:04.510875   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:04.510915   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:04.510928   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:04.510960   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:04.510988   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:04.511020   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:04.511077   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:04.511136   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:04.511156   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:04.511176   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:04.511229   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:04.532173   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:04.620518   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:04.624965   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:04.638633   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:04.642459   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:04.656462   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:04.660491   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:04.673947   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:04.678496   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:04.694022   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:04.698129   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:04.711457   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:04.715160   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:04.729617   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:04.756565   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:04.783062   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:04.808557   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:04.834684   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:04.860337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:04.887473   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:04.913478   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:04.941337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:04.967151   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:04.994669   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:05.028238   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:05.050978   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:05.073833   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:05.097285   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:05.120404   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:05.142847   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:05.163160   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:05.184053   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:05.190286   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:05.200925   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.204978   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.205054   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.211914   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:05.222874   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:05.234900   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238900   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238947   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.246276   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:05.255894   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:05.266269   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270313   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270382   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.278196   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:05.287746   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:05.291476   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:05.298503   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:05.305486   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:05.312720   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:05.319784   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:05.327527   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:39:05.334693   95759 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:39:05.334792   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:05.334818   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:05.334851   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:05.347510   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.347572   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
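	The manifest above is the kube-vip static pod that minikube writes to /etc/kubernetes/manifests/kube-vip.yaml (the scp appears a few lines below); it advertises the control-plane VIP 192.168.49.254 on eth0 and fronts the API server on port 8443. An illustrative, hypothetical check from the node, not part of the test run:
	
		# Confirm the VIP is bound to eth0 and that the API answers on it.
		ip addr show dev eth0 | grep 192.168.49.254
		curl -k https://192.168.49.254:8443/healthz
	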
	I0919 22:39:05.347618   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:05.356984   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:05.357056   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:05.367597   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:05.387861   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:05.406815   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:05.427878   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:05.432487   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:05.444804   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.548051   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.560978   95759 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:05.561299   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.563075   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:05.564716   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.672434   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.689063   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:05.689191   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:05.689392   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698088   95759 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:39:05.698164   95759 node_ready.go:38] duration metric: took 8.753764ms for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698182   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:05.698299   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.711300   95759 api_server.go:72] duration metric: took 150.274321ms to wait for apiserver process to appear ...
	I0919 22:39:05.711326   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:05.711345   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:05.716499   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:05.717555   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:05.717586   95759 api_server.go:131] duration metric: took 6.25291ms to wait for apiserver health ...
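The health check above talks to the apiserver on this node's own endpoint (https://192.168.49.2:8443) rather than the VIP, per the ClientConfig override logged a few lines earlier. Roughly the same probe can be reproduced by hand; the curl invocation below is illustrative and assumes /healthz and /version are reachable without the profile's client certificate (minikube's checker authenticates with it):

	# a healthy control plane answers 200 with the literal body "ok"
	curl -sk https://192.168.49.2:8443/healthz
	# the same endpoint family also exposes the build info behind the
	# "control plane version: v1.34.0" line
	curl -sk https://192.168.49.2:8443/version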
	I0919 22:39:05.717595   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:05.724069   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:05.724156   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724172   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724180   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.724186   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.724191   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.724196   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.724201   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.724210   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.724219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.724226   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.724233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.724241   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.724248   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.724256   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.724262   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.724268   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.724277   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.724285   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.724293   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.724298   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.724303   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.724308   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.724317   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.724325   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.724338   95759 system_pods.go:74] duration metric: took 6.735402ms to wait for pod list to return data ...
	I0919 22:39:05.724355   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:05.728216   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:05.728243   95759 default_sa.go:55] duration metric: took 3.879783ms for default service account to be created ...
	I0919 22:39:05.728256   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:05.733903   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:05.733937   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733945   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733951   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.733954   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.733958   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.733961   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.733964   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.733969   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.733973   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.733976   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.733979   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.733982   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.733986   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.733990   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.733993   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.733995   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.733999   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.734007   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.734010   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.734013   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.734016   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.734019   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.734022   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.734025   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.734035   95759 system_pods.go:126] duration metric: took 5.77298ms to wait for k8s-apps to be running ...
	I0919 22:39:05.734044   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:05.734085   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.746589   95759 system_svc.go:56] duration metric: took 12.533548ms WaitForService to wait for kubelet
	I0919 22:39:05.746629   95759 kubeadm.go:578] duration metric: took 185.605298ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:05.746655   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:05.750196   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750221   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750233   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750236   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750240   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750242   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750246   95759 node_conditions.go:105] duration metric: took 3.586256ms to run NodePressure ...
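The NodePressure check reads each node's reported capacity, which is why the storage and cpu lines repeat three times, once per control-plane node (ha-984158, -m02, -m03). The same figures can be pulled with kubectl; the custom-columns layout is just an illustrative choice, and ha-984158 is the kubectl context minikube creates for this profile:

	kubectl --context ha-984158 get nodes \
	  -o custom-columns=NAME:.metadata.name,CPU:.status.capacity.cpu,EPHEMERAL:.status.capacity.ephemeral-storage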
	I0919 22:39:05.750259   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:05.750286   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:05.752610   95759 out.go:203] 
	I0919 22:39:05.754285   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.754392   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.756186   95759 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:39:05.757628   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:05.758862   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:05.760172   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:05.760197   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:05.760252   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:05.760314   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:05.760332   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:05.760441   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.782434   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:05.782456   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:05.782471   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:05.782504   95759 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:05.782575   95759 start.go:364] duration metric: took 49.512µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:39:05.782600   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:05.782610   95759 fix.go:54] fixHost starting: m03
	I0919 22:39:05.782826   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:05.800849   95759 fix.go:112] recreateIfNeeded on ha-984158-m03: state=Stopped err=<nil>
	W0919 22:39:05.800880   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:05.803272   95759 out.go:252] * Restarting existing docker container for "ha-984158-m03" ...
	I0919 22:39:05.803361   95759 cli_runner.go:164] Run: docker start ha-984158-m03
	I0919 22:39:06.059506   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:06.078641   95759 kic.go:430] container "ha-984158-m03" state is running.
	I0919 22:39:06.079004   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:06.099001   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:06.099262   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:06.099315   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:06.117915   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:06.118166   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:06.118181   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:06.118862   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49366->127.0.0.1:32823: read: connection reset by peer
	I0919 22:39:09.258735   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.258764   95759 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:39:09.258824   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.277807   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.278027   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.278041   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:39:09.428956   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.429040   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.447284   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.447535   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.447560   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:39:09.593539   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:09.593573   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:39:09.593598   95759 ubuntu.go:190] setting up certificates
	I0919 22:39:09.593609   95759 provision.go:84] configureAuth start
	I0919 22:39:09.593674   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:09.617495   95759 provision.go:143] copyHostCerts
	I0919 22:39:09.617537   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617594   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:39:09.617607   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617684   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:39:09.617811   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.617846   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:39:09.617853   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.618482   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:39:09.618632   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618662   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:39:09.618671   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618706   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:39:09.618780   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:39:09.838307   95759 provision.go:177] copyRemoteCerts
	I0919 22:39:09.838429   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:39:09.838478   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.863933   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:09.983312   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:39:09.983424   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:39:10.021925   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:39:10.022008   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:39:10.063154   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:39:10.063276   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:39:10.104760   95759 provision.go:87] duration metric: took 511.137266ms to configureAuth
	I0919 22:39:10.104795   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:39:10.105072   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:10.105290   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.130112   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:10.130385   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:10.130414   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:39:10.533816   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:39:10.533844   95759 machine.go:96] duration metric: took 4.434568252s to provisionDockerMachine
	I0919 22:39:10.533858   95759 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:39:10.533871   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:39:10.533932   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:39:10.533966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.553604   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.653755   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:39:10.657424   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:39:10.657456   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:39:10.657463   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:39:10.657469   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:39:10.657479   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:39:10.657531   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:39:10.657598   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:39:10.657608   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:39:10.657691   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:39:10.667261   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:10.700579   95759 start.go:296] duration metric: took 166.704996ms for postStartSetup
	I0919 22:39:10.700662   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:10.700704   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.728418   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.830886   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:10.836158   95759 fix.go:56] duration metric: took 5.053541909s for fixHost
	I0919 22:39:10.836186   95759 start.go:83] releasing machines lock for "ha-984158-m03", held for 5.053597855s
	I0919 22:39:10.836256   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:10.859049   95759 out.go:179] * Found network options:
	I0919 22:39:10.860801   95759 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:39:10.862070   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862112   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862141   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862155   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:39:10.862232   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:39:10.862282   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.862297   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:39:10.862360   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.885568   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.886944   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:11.122339   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:39:11.127789   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.138248   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:39:11.138341   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.147671   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
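Before detecting the cgroup driver, any default loopback/bridge/podman CNI configs are renamed with a .mk_disabled suffix, presumably so they cannot take precedence over the cluster's own CNI (kindnet on this cluster, per the kindnet-* pods above); here only the loopback config was found and moved. What was disabled can be confirmed on the node with a plain directory listing:

	# disabled configs keep their contents but carry the .mk_disabled suffix
	ls -la /etc/cni/net.d/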
	I0919 22:39:11.147698   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:39:11.147735   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:39:11.147774   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:39:11.160936   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:39:11.174826   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:39:11.174888   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:39:11.190348   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:39:11.203116   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:39:11.321919   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:39:11.432545   95759 docker.go:234] disabling docker service ...
	I0919 22:39:11.432608   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:39:11.446263   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:39:11.458056   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:39:11.572334   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:39:11.685921   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:39:11.698336   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:39:11.718031   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:39:11.718164   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.731929   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:39:11.732016   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.743385   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.755175   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.766807   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:39:11.779733   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.791806   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.802833   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.813877   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:39:11.824761   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:39:11.835392   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:11.940776   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
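The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf so CRI-O uses the new pause image and the systemd cgroup manager, pins conmon to the pod cgroup, and allows unprivileged ports, after which CRI-O is restarted. A quick way to confirm the result, with key names taken from the sed expressions in the log and the expected values shown as comments:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	  /etc/crio/crio.conf.d/02-crio.conf
	# expected, roughly:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",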
	I0919 22:39:12.206168   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:12.206252   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:12.210177   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:12.210235   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:12.213924   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:12.250824   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:12.250899   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.288367   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.331200   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:12.332776   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:12.334399   95759 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:39:12.335764   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:12.353568   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:12.357576   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:12.370671   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:12.370930   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:12.371317   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:12.389760   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:12.390003   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:39:12.390016   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:12.390030   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:12.390204   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:12.390274   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:12.390289   95759 certs.go:256] generating profile certs ...
	I0919 22:39:12.390403   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:12.390484   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:39:12.390533   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:12.390549   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:12.390568   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:12.390585   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:12.390601   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:12.390614   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:12.390628   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:12.390641   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:12.390653   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:12.390711   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:12.390749   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:12.390761   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:12.390789   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:12.390812   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:12.390832   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:12.390871   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:12.390895   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:12.390910   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:12.390923   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:12.390971   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:12.408363   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:12.497500   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:12.501626   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:12.514736   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:12.518842   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:12.534226   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:12.538486   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:12.551906   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:12.555555   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:12.568778   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:12.573237   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:12.587524   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:12.591646   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:12.605021   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:12.632905   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:12.658562   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:12.685222   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:12.710986   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:12.742821   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:12.774649   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:12.808068   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:12.840999   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:12.873033   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:12.904176   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:12.935469   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:12.958451   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:12.983716   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:13.006372   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:13.026634   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:13.048003   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:13.067093   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:13.091242   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:13.097309   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:13.107657   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111389   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111438   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.118417   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:13.129698   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:13.140452   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144194   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144245   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.151266   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:13.161188   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:13.171891   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176332   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176413   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.184138   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:13.193625   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:13.197577   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:13.204628   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:13.211553   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:13.218449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:13.225712   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:13.232770   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
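The certificate handling above leans on two openssl conventions: openssl x509 -hash prints the subject hash that OpenSSL looks up as a /etc/ssl/certs/<hash>.0 symlink (hence b5213941.0, 51391683.0 and 3ec20f2e.0), and -checkend 86400 exits non-zero if the certificate expires within the next 24 hours. A standalone sketch of both checks, with paths taken from the log:

	# subject hash, i.e. the name of the /etc/ssl/certs/<hash>.0 symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# succeeds only if the certificate is still valid 86400 seconds (24 h) from now
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	  && echo "valid for at least 24h"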
	I0919 22:39:13.239778   95759 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:39:13.239885   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
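The unit shown above is applied as a kubeadm-style drop-in (the 10-kubeadm.conf scp'd a few lines below); the empty ExecStart= line clears the stock command before re-declaring it, which is how the per-node flags --hostname-override=ha-984158-m03 and --node-ip=192.168.49.4 end up on this node's kubelet. The effective unit can be inspected on the node with the standard systemd command below (not something minikube itself runs here):

	# prints /lib/systemd/system/kubelet.service followed by the 10-kubeadm.conf drop-in
	systemctl cat kubelet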
	I0919 22:39:13.239907   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:13.239943   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:13.252386   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:13.252462   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
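Because the lsmod check above found no ip_vs modules, IPVS-based control-plane load-balancing is skipped and this manifest only configures ARP-based failover: it advertises the VIP 192.168.49.254 on eth0 and relies on leader election (vip_leaderelection with lease plndr-cp-lock) to decide which node answers for it. The YAML is copied into /etc/kubernetes/manifests a few lines below, so the kubelet runs it as a static pod, which is the kube-vip-ha-984158-m03 pod that appears in the pod listings later in this log. A quick check, using the kubectl context minikube creates for the profile:

	kubectl --context ha-984158 -n kube-system get pod kube-vip-ha-984158-m03 -o wide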
	I0919 22:39:13.252520   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:13.261653   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:13.261771   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:13.271379   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:13.292763   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:13.314362   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:13.334791   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:13.338371   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:13.350977   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.456433   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.469559   95759 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:13.469884   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.472456   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:13.474707   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.588742   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.602600   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:13.602666   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:13.602869   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605956   95759 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:39:13.605979   95759 node_ready.go:38] duration metric: took 3.097172ms for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605993   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:13.606032   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:13.618211   95759 api_server.go:72] duration metric: took 148.610181ms to wait for apiserver process to appear ...
	I0919 22:39:13.618235   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:13.618251   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:13.622760   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:13.623811   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:13.623838   95759 api_server.go:131] duration metric: took 5.597306ms to wait for apiserver health ...
	I0919 22:39:13.623847   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:13.632153   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:13.632182   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.632190   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.632196   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.632200   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.632207   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.632210   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.632214   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.632216   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.632219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.632229   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.632233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.632237   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.632241   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.632247   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.632253   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.632256   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.632259   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.632261   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.632264   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.632274   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.632277   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.632282   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.632285   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.632288   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.632295   95759 system_pods.go:74] duration metric: took 8.442512ms to wait for pod list to return data ...
	I0919 22:39:13.632305   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:13.635316   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:13.635337   95759 default_sa.go:55] duration metric: took 3.026488ms for default service account to be created ...
	I0919 22:39:13.635346   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:13.733862   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:13.733908   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.733922   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.733929   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.733937   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.733945   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.733952   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.733958   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.733964   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.733969   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.733974   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.733985   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.733995   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.734001   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.734013   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.734018   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.734021   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.734024   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.734027   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.734033   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.734044   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.734052   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.734057   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.734065   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.734069   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.734079   95759 system_pods.go:126] duration metric: took 98.726691ms to wait for k8s-apps to be running ...
	I0919 22:39:13.734091   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:13.734175   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:13.747528   95759 system_svc.go:56] duration metric: took 13.410723ms WaitForService to wait for kubelet
	I0919 22:39:13.747570   95759 kubeadm.go:578] duration metric: took 277.970313ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:13.747595   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:13.751576   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751598   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751610   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751613   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751616   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751619   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751622   95759 node_conditions.go:105] duration metric: took 4.023347ms to run NodePressure ...
	I0919 22:39:13.751634   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:13.751651   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:13.753417   95759 out.go:203] 
	I0919 22:39:13.755135   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.755254   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.757081   95759 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:39:13.758394   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:13.759816   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:13.761015   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:13.761039   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:13.761051   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:13.761261   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:13.761304   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:13.761429   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.782360   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:13.782385   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:13.782406   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:13.782436   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:13.782501   95759 start.go:364] duration metric: took 44.732µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:39:13.782524   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:13.782534   95759 fix.go:54] fixHost starting: m04
	I0919 22:39:13.782740   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:13.801027   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:39:13.801060   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:13.802864   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:39:13.802931   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:39:14.055762   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:14.074848   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:39:14.075262   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:39:14.094352   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:14.094594   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:14.094647   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:39:14.114064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:14.114317   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:39:14.114330   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:14.114961   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50476->127.0.0.1:32828: read: connection reset by peer
	I0919 22:39:17.116460   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.118409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.120443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.120776   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.121743   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.123258   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.125391   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.125915   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.126437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.127525   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.128400   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.130402   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:53.132094   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:56.132448   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:59.133362   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:02.134004   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:05.136365   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:08.136767   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:11.137236   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:14.138295   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:17.139769   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:20.141642   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:23.143546   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:26.143966   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:29.144829   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:32.146423   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:35.148801   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:38.150005   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:41.150409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:44.150842   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:47.152406   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:50.154676   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:53.156471   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:56.157387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:59.158366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:02.160382   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:05.162387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:08.162900   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:11.163385   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:14.164700   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:17.165484   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:20.167366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:23.169809   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:26.170437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:29.171409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:32.173443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:35.175650   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:38.176984   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:41.177465   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:44.179757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:47.181386   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:50.183757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:53.185945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:56.186445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:59.187353   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:02.189451   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:05.191306   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:08.191935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:11.192418   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:14.194206   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:14.194236   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:42:14.194304   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.214461   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.214567   95759 machine.go:96] duration metric: took 3m0.119960942s to provisionDockerMachine
	I0919 22:42:14.214652   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:14.214684   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.238129   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.238280   95759 retry.go:31] will retry after 248.39527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.487752   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.507066   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.507179   95759 retry.go:31] will retry after 241.490952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.749696   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.769271   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.769394   95759 retry.go:31] will retry after 573.29064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.342939   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.361305   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.361440   95759 retry.go:31] will retry after 493.546865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.855177   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.876393   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:15.876503   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:15.876520   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.876565   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:42:15.876594   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.896632   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.896744   95759 retry.go:31] will retry after 211.367435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.109288   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.130175   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.130270   95759 retry.go:31] will retry after 289.868834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.420891   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.442472   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.442604   95759 retry.go:31] will retry after 547.590918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.990359   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:17.008923   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:17.009049   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009064   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009073   95759 fix.go:56] duration metric: took 3m3.226540631s for fixHost
	I0919 22:42:17.009081   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m3.226570319s
	W0919 22:42:17.009092   95759 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009191   95759 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009203   95759 start.go:729] Will try again in 5 seconds ...
	I0919 22:42:22.010253   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:42:22.010363   95759 start.go:364] duration metric: took 70.627µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:42:22.010395   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:42:22.010406   95759 fix.go:54] fixHost starting: m04
	I0919 22:42:22.010649   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.029262   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:42:22.029294   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:42:22.031096   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:42:22.031220   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:42:22.294621   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.313475   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:42:22.313799   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:42:22.333284   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:42:22.333514   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:42:22.333568   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:42:22.353907   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:42:22.354187   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:42:22.354204   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:42:22.354888   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32833: read: connection reset by peer
	I0919 22:42:25.355457   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.356034   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.356407   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.358370   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.359693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.360614   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.362397   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.363784   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.364408   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.366596   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.367888   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.369219   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:01.370395   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:04.371156   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:07.372724   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:10.373695   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:13.374908   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:16.375383   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:19.376388   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:22.378537   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:25.379508   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:28.380693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:31.381372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:34.383699   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:37.384935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:40.385685   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:43.388048   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:46.388445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:49.389657   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:52.391627   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:55.392687   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:58.393125   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:01.393619   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:04.395945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:07.398372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:10.398608   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:13.400912   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:16.401401   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:19.402479   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:22.404415   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:25.405562   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:28.406498   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:31.407755   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:34.410076   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:37.412454   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:40.413768   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:43.415168   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:46.416416   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:49.417399   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:52.419643   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:55.420363   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:58.420738   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:01.421609   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:04.423913   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:07.425430   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:10.426778   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:13.428381   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:16.429193   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:19.430490   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:22.432491   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:45:22.432543   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:45:22.432609   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.452712   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.452777   95759 machine.go:96] duration metric: took 3m0.119250879s to provisionDockerMachine
	I0919 22:45:22.452858   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:22.452892   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.472911   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.473047   95759 retry.go:31] will retry after 202.283506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:22.676548   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.694834   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.694965   95759 retry.go:31] will retry after 463.907197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.159340   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.178560   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.178658   95759 retry.go:31] will retry after 365.232594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.544210   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.564214   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:23.564366   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:23.564390   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.564449   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:45:23.564494   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.583703   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.583796   95759 retry.go:31] will retry after 343.872214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.928329   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.946762   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.946864   95759 retry.go:31] will retry after 341.564773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.289296   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.312255   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:24.312369   95759 retry.go:31] will retry after 341.728488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.655044   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.674698   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:24.674839   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.674858   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.674871   95759 fix.go:56] duration metric: took 3m2.664466794s for fixHost
	I0919 22:45:24.674881   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m2.664502957s
	W0919 22:45:24.674982   95759 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.677468   95759 out.go:203] 
	W0919 22:45:24.678601   95759 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.678620   95759 out.go:285] * 
	W0919 22:45:24.680349   95759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:45:24.681822   95759 out.go:203] 
	
	
	==> CRI-O <==
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.490551103Z" level=info msg="Starting container: b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74" id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.498969531Z" level=info msg="Started container" PID=1368 containerID=b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74 description=kube-system/coredns-66bc5c9577-ltjmz/coredns id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer sandboxID=815752732ad74ae8e5961e3c79b9a821b4903503b20978d661c98a6a36ef4b9d
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.902522587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.906977791Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907009772Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907037293Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911428136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911466965Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911486751Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915460017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915497091Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915525773Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919544523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919575130Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.012886161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013169907Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013901636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.014168511Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018353225Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018511963Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036610475Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/passwd: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036659847Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/group: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.095888981Z" level=info msg="Created container f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.096561974Z" level=info msg="Starting container: f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f" id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.104038077Z" level=info msg="Started container" PID=1741 containerID=f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f description=kube-system/storage-provisioner/storage-provisioner id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c833b8c10762b8d7272f8c569836ab444d6d5b309d15da090c6b1664db70ed7c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f73602ecef49b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       3                   c833b8c10762b       storage-provisioner
	b2cb38a999cac       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 minutes ago       Running             coredns                   1                   815752732ad74       coredns-66bc5c9577-ltjmz
	676fc8265fa71       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   7 minutes ago       Running             busybox                   1                   853e9db2bdfa8       busybox-7b57f96db7-rnjl7
	7e1e5941c1568       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 minutes ago       Running             kindnet-cni               1                   547d271717250       kindnet-rd882
	c9027fdf07d43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago       Exited              storage-provisioner       2                   c833b8c10762b       storage-provisioner
	a22f43664887c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   7 minutes ago       Running             kube-proxy                1                   d51eb4228f1eb       kube-proxy-hdxxn
	377f1c9e1defe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 minutes ago       Running             coredns                   1                   e756edadac294       coredns-66bc5c9577-5gnbx
	55f2dff5151a8       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   7 minutes ago       Running             kube-apiserver            1                   0d488246e5b37       kube-apiserver-ha-984158
	79c74b643f5a5       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   7 minutes ago       Running             kube-scheduler            1                   8f2d6202aa772       kube-scheduler-ha-984158
	32b11c5432de7       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   7 minutes ago       Running             kube-vip                  0                   01eeb16fe8f46       kube-vip-ha-984158
	935ae0c237d97       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   7 minutes ago       Running             kube-controller-manager   1                   8871adc8c9755       kube-controller-manager-ha-984158
	13b67e56860f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 minutes ago       Running             etcd                      1                   0fb5a565c96e5       etcd-ha-984158
	
	
	==> coredns [377f1c9e1defee6bb59c215f0a1a03ae29aa5b77855a39725abe9d88f4182f71] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47318 - 34366 "HINFO IN 8418387040146284568.7180250627065820856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092087824s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47142 - 36068 "HINFO IN 3054302858159562754.8459958995054926466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023807531s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce0d9390578a44a698c3fda69fb20273
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m6s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 7m4s                   kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  Starting                 7m13s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m12s (x8 over 7m13s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m12s (x8 over 7m13s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m12s (x8 over 7m13s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b431cbd7af4c3f980669ae3ee3bdc5
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m29s                  kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  NodeHasNoDiskPressure    9m15s (x8 over 9m15s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m15s (x8 over 9m15s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m15s (x8 over 9m15s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m10s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 7m11s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m11s (x8 over 7m11s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m11s (x8 over 7m11s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m11s (x8 over 7m11s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           7m3s                   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           6m27s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87] <==
	{"level":"info","ts":"2025-09-19T22:39:07.226913Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:39:07.226991Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.240674Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.244098Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597341Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-09-19T22:45:30.851173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:45:30.877959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55454","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:45:30.889351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7185048267463743064 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:45:30.890632Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"e8495135083f8257","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:45:30.890677Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.890748Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890772Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.890793Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890802Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890863Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891077Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891161Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e8495135083f8257","error":"failed to read e8495135083f8257 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:45:30.891186Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891277Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:45:30.891350Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.891389Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.891424Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.898371Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.901479Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"e8495135083f8257"}
	
	
	==> kernel <==
	 22:45:39 up  1:28,  0 users,  load average: 0.30, 0.56, 0.57
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7e1e5941c1568be6947d5879f8b05807535d937790e13f1de20f69c7cb7f0ccd] <==
	I0919 22:44:54.902217       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:44:54.902415       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:44:54.902428       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:04.902949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:04.902981       1 main.go:301] handling current node
	I0919 22:45:04.902997       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:04.903003       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:04.903212       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:04.903225       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:14.910562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:14.910592       1 main.go:301] handling current node
	I0919 22:45:14.910608       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:14.910612       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:14.910787       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:14.910796       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:24.910192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:24.910232       1 main.go:301] handling current node
	I0919 22:45:24.910253       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:24.910259       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:24.910469       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:24.910478       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:34.901935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:34.901974       1 main.go:301] handling current node
	I0919 22:45:34.901990       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:34.901994       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645] <==
	I0919 22:38:33.237483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 22:38:33.237492       1 cache.go:39] Caches are synced for autoregister controller
	I0919 22:38:33.244473       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0919 22:38:33.256040       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:38:33.256074       1 policy_source.go:240] refreshing policies
	I0919 22:38:33.258725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:38:33.330813       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 22:38:33.340553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 22:38:33.343923       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 22:38:34.057940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:38:34.123968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:38:34.654257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0919 22:38:36.563731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:38:37.013446       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:39:07.528152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:39:58.806991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:59.831450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:12.701181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:22.300169       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:28.420805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:42.481948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:43.538989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:45.026909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:54.365379       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:11.122450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba] <==
	I0919 22:38:36.560524       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:38:36.561755       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:38:36.563075       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 22:38:36.564243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:38:36.565318       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:38:36.567600       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:38:36.567791       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:38:36.567913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:38:36.568459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:38:36.568957       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:38:36.577191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:38:36.580467       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:38:36.580630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:38:36.580760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:38:36.580809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:38:36.580815       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	I0919 22:38:36.580872       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:38:36.590818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:39:15.982637       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6rhpz\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:39:15.983309       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4dd58d83-a50d-4db8-9919-ac6b8b041c9e", APIVersion:"v1", ResourceVersion:"312", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6rhpz": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:45:36.573357       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573394       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573400       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573411       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [a22f43664887c7fcbb5c6716c9592a2cd654e455fd905f9edd287a2f6c9aba58] <==
	I0919 22:38:34.512575       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:38:34.579894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:38:34.680953       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:38:34.680992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:38:34.681200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:38:34.704454       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:38:34.704534       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:38:34.710440       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:38:34.710834       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:38:34.710880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:34.712458       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:38:34.712504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:38:34.712543       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:38:34.712552       1 config.go:309] "Starting node config controller"
	I0919 22:38:34.712564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:38:34.712555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:38:34.712587       1 config.go:200] "Starting service config controller"
	I0919 22:38:34.712613       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:38:34.812688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:38:34.812708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:38:34.812734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:38:34.812768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9] <==
	I0919 22:38:28.535240       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:38:33.134307       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:38:33.134372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:38:33.134385       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:38:33.134394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:38:33.174419       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:38:33.174609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:33.180536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.180680       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.184947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:38:33.185091       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:38:33.284411       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:43:36 ha-984158 kubelet[720]: E0919 22:43:36.954234     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321816953914416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955354     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955393     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956480     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956517     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958459     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958501     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.959975     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.960016     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961918     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961955     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964584     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964626     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966592     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966634     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968415     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968455     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969597     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969639     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971464     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971505     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:26 ha-984158 kubelet[720]: E0919 22:45:26.972696     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321926972495462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:26 ha-984158 kubelet[720]: E0919 22:45:26.972734     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321926972495462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:36 ha-984158 kubelet[720]: E0919 22:45:36.973935     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321936973692417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:36 ha-984158 kubelet[720]: E0919 22:45:36.973973     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321936973692417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-qctnj
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DeleteSecondaryNode]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj
helpers_test.go:290: (dbg) kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-qctnj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jf9wg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jf9wg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  12s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  12s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  10s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  11s (x2 over 13s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeleteSecondaryNode (13.25s)
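The post-mortem above shows why this test's cluster ends up degraded: busybox-7b57f96db7-qctnj stays Pending after the secondary node is deleted, with the scheduler reporting 0/3 nodes available (one node unschedulable, the other two rejected by the busybox pod anti-affinity rules, since each already runs a busybox replica). The sketch below is a client-go equivalent of the non-running-pods query issued at helpers_test.go:269; it is illustrative only, and loading the kubeconfig from the default home path is an assumption, as the test itself shells out to kubectl with --context ha-984158.

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumption: kubeconfig at the default location (~/.kube/config);
		// the test drives kubectl directly rather than using client-go.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same filter as the post-mortem query: pods in any namespace whose
		// phase is not Running (here that is the Pending busybox replica).
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Printf("%s/%s: %s\n", p.Namespace, p.Name, p.Status.Phase)
		}
	}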

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.63s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
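The JSON printed by this command is what the assertion at ha_test.go:415 below inspects: the test expects the "ha-984158" profile to report Status "Degraded" once the secondary control-plane node is gone, but it still reports "Starting". A minimal sketch of that status check follows, assuming only the JSON shape visible in the failure message (a top-level "valid" array whose entries carry "Name" and "Status"); the struct and the direct exec.Command call are illustrative and not taken from ha_test.go.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// profileList covers only the fields the assertion below looks at; the full
	// schema of `minikube profile list --output json` is larger (see the JSON in
	// the failure message).
	type profileList struct {
		Valid []struct {
			Name   string `json:"Name"`
			Status string `json:"Status"`
		} `json:"valid"`
	}

	func main() {
		// Run the same command the test runs and inspect the reported status.
		out, err := exec.Command("out/minikube-linux-amd64", "profile", "list", "--output", "json").Output()
		if err != nil {
			panic(err)
		}
		var pl profileList
		if err := json.Unmarshal(out, &pl); err != nil {
			panic(err)
		}
		for _, p := range pl.Valid {
			if p.Name == "ha-984158" {
				fmt.Printf("profile %s status: %s (test expects Degraded)\n", p.Name, p.Status)
			}
		}
	}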
ha_test.go:415: expected profile "ha-984158" in json of 'profile list' to have "Degraded" status but have "Starting" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-984158\",\"Status\":\"Starting\",\"Config\":{\"Name\":\"ha-984158\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1\",\"Memory\":3072,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"docker\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares
\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.34.0\",\"ClusterName\":\"ha-984158\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.168.49.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"crio\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.168.49.2\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":
\"192.168.49.3\",\"Port\":8443,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"crio\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m04\",\"IP\":\"192.168.49.5\",\"Port\":0,\"KubernetesVersion\":\"v1.34.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"amd-gpu-device-plugin\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubetail\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry
-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volcano\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"MountString\":\"\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"DisableCoreDNSLog\":false,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":
\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-linux-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 95956,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:38:20.505682313Z",
	            "FinishedAt": "2025-09-19T22:38:19.832335475Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8e4fdcd3468198deb98d4a8f23cbd640a198a460cfea4c64e865edb3f33eaab9",
	            "SandboxKey": "/var/run/docker/netns/8e4fdcd34681",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32813"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32814"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32817"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32815"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32816"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:4b:fa:16:2f:42",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "b56ee79fb4c604077e565626768d3a9928d875fe4a72dd45dd22369025cf8f31",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.182182224s)
helpers_test.go:260: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node start m02 --alsologtostderr -v 5                                                                                      │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ stop    │ ha-984158 stop --alsologtostderr -v 5                                                                                                │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:38 UTC │
	│ start   │ ha-984158 start --wait true --alsologtostderr -v 5                                                                                   │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │                     │
	│ node    │ ha-984158 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │ 19 Sep 25 22:45 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:38:20
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:38:20.249865   95759 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:38:20.249988   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.249994   95759 out.go:374] Setting ErrFile to fd 2...
	I0919 22:38:20.250000   95759 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:38:20.250249   95759 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:38:20.250707   95759 out.go:368] Setting JSON to false
	I0919 22:38:20.251700   95759 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":4850,"bootTime":1758316650,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:38:20.251800   95759 start.go:140] virtualization: kvm guest
	I0919 22:38:20.254109   95759 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:38:20.255764   95759 notify.go:220] Checking for updates...
	I0919 22:38:20.255845   95759 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:38:20.257481   95759 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:38:20.259062   95759 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:20.260518   95759 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:38:20.262187   95759 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:38:20.263765   95759 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:38:20.265783   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:20.265907   95759 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:38:20.294398   95759 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:38:20.294613   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.361388   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.349869718 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.361497   95759 docker.go:318] overlay module found
	I0919 22:38:20.363722   95759 out.go:179] * Using the docker driver based on existing profile
	I0919 22:38:20.365305   95759 start.go:304] selected driver: docker
	I0919 22:38:20.365327   95759 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.365467   95759 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:38:20.365552   95759 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:38:20.420337   95759 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:0 ContainersPaused:0 ContainersStopped:4 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:38:20.409819419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:38:20.420989   95759 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:38:20.421017   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:20.421096   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:20.421172   95759 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:20.423543   95759 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:38:20.425622   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:20.427928   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:20.429486   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:20.429552   95759 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:38:20.429561   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:20.429624   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:20.429683   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:20.429696   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:20.429903   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.451753   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:20.451777   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:20.451800   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:20.451830   95759 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:20.451903   95759 start.go:364] duration metric: took 52.261µs to acquireMachinesLock for "ha-984158"
	I0919 22:38:20.451929   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:20.451935   95759 fix.go:54] fixHost starting: 
	I0919 22:38:20.452267   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.470646   95759 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:38:20.470675   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:20.473543   95759 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:38:20.473635   95759 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:38:20.725924   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:38:20.747322   95759 kic.go:430] container "ha-984158" state is running.
	I0919 22:38:20.748445   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:20.768582   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:20.768847   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:20.768938   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:20.788669   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:20.788894   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:20.788907   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:20.789621   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:46262->127.0.0.1:32813: read: connection reset by peer
	I0919 22:38:23.928529   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:23.928563   95759 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:38:23.928620   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:23.947237   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:23.947447   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:23.947461   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:38:24.095390   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:38:24.095477   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.113617   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.113853   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.113878   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:24.249977   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:24.250008   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:24.250048   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:24.250058   95759 provision.go:84] configureAuth start
	I0919 22:38:24.250116   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:24.268530   95759 provision.go:143] copyHostCerts
	I0919 22:38:24.268578   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268614   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:24.268624   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:24.268699   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:24.268797   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268816   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:24.268820   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:24.268848   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:24.268908   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268928   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:24.268932   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:24.268959   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:24.269015   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:38:24.530322   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:24.530388   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:24.530429   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.549937   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:24.649314   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:24.649386   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:24.674567   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:24.674639   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0919 22:38:24.700190   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:24.700255   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:24.725998   95759 provision.go:87] duration metric: took 475.930644ms to configureAuth
	I0919 22:38:24.726025   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:24.726265   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:24.726378   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:24.744668   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:24.744868   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32813 <nil> <nil>}
	I0919 22:38:24.744887   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:25.041744   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:25.041773   95759 machine.go:96] duration metric: took 4.2729084s to provisionDockerMachine
	I0919 22:38:25.041790   95759 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:38:25.041804   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:25.041885   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:25.041937   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.061613   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.158944   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:25.162445   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:25.162473   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:25.162481   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:25.162487   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:25.162497   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:25.162543   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:25.162612   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:25.162622   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:25.162697   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:25.171420   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:25.196548   95759 start.go:296] duration metric: took 154.74522ms for postStartSetup
	I0919 22:38:25.196622   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:25.196658   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.214818   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.307266   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:25.311757   95759 fix.go:56] duration metric: took 4.859817354s for fixHost
	I0919 22:38:25.311786   95759 start.go:83] releasing machines lock for "ha-984158", held for 4.859867111s
	I0919 22:38:25.311855   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:38:25.331292   95759 ssh_runner.go:195] Run: cat /version.json
	I0919 22:38:25.331342   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.331445   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:25.331519   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:38:25.350964   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.351259   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:38:25.521285   95759 ssh_runner.go:195] Run: systemctl --version
	I0919 22:38:25.525969   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:25.668131   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:25.673196   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.683302   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:25.683463   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:25.693199   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:25.693229   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:25.693261   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:25.693301   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:25.705935   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:25.717521   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:25.717575   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:25.730590   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:25.742679   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:25.806884   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:25.876321   95759 docker.go:234] disabling docker service ...
	I0919 22:38:25.876399   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:25.889742   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:25.902299   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:25.968552   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:26.035171   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:26.047090   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:26.063771   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:26.063823   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.074242   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:26.074296   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.085364   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.096159   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.106569   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:26.116384   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.127163   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.138533   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:26.149140   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:26.157845   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:26.166573   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.230447   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:38:26.333573   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:38:26.333644   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:38:26.337977   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:38:26.338040   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:38:26.341911   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:38:26.375206   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:38:26.375273   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.410086   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:38:26.448363   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:38:26.449629   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:38:26.467494   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:38:26.471488   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.484310   95759 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:38:26.484505   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:26.484557   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.531218   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.531242   95759 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:38:26.531296   95759 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:38:26.567181   95759 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:38:26.567205   95759 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:38:26.567217   95759 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:38:26.567354   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:38:26.567443   95759 ssh_runner.go:195] Run: crio config
	I0919 22:38:26.612533   95759 cni.go:84] Creating CNI manager for ""
	I0919 22:38:26.612558   95759 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0919 22:38:26.612573   95759 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:38:26.612596   95759 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:38:26.612731   95759 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:38:26.612751   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:38:26.612791   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:38:26.625916   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:26.626026   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
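	(Editor's note: the static pod manifest above configures kube-vip in ARP mode on eth0 with leader election via the plndr-cp-lock lease, announcing the control-plane VIP 192.168.49.254 on port 8443; because the ip_vs kernel modules were unavailable, IPVS load balancing was skipped and the VIP simply follows whichever control-plane node holds the lease. A hypothetical probe, not part of the test suite, that checks whether the VIP currently answers on the API-server port:)

	// Dial the kube-vip address from the manifest above and report reachability.
	// Values 192.168.49.254:8443 are taken from the generated config; this is an
	// illustrative check only.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := net.JoinHostPort("192.168.49.254", "8443")
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err != nil {
			fmt.Printf("VIP %s not reachable: %v\n", addr, err)
			return
		}
		conn.Close()
		fmt.Printf("VIP %s is answering\n", addr)
	}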
	I0919 22:38:26.626083   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:38:26.636322   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:38:26.636382   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:38:26.645958   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:38:26.665184   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:38:26.684627   95759 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:38:26.703734   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:38:26.722194   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:38:26.726033   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:38:26.737748   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:26.802332   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:38:26.828015   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:38:26.828140   95759 certs.go:194] generating shared ca certs ...
	I0919 22:38:26.828156   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:26.828370   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:38:26.828426   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:38:26.828439   95759 certs.go:256] generating profile certs ...
	I0919 22:38:26.828533   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:38:26.828559   95759 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24
	I0919 22:38:26.828573   95759 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.4 192.168.49.254]
	I0919 22:38:27.179556   95759 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 ...
	I0919 22:38:27.179596   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24: {Name:mk0ca61656ed051ffa5dbf8b847da7c47b965f66 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179810   95759 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 ...
	I0919 22:38:27.179828   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24: {Name:mk16b6aae6417eca80799eff0a4c27dc0860bcd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.179937   95759 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:38:27.180098   95759 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.b5848c24 -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:38:27.180260   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:38:27.180276   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:38:27.180289   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:38:27.180307   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:38:27.180321   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:38:27.180334   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:38:27.180354   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:38:27.180364   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:38:27.180373   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:38:27.180419   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:38:27.180445   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:38:27.180454   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:38:27.180474   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:38:27.180497   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:38:27.180517   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:38:27.180557   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:27.180607   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.180624   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.180637   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.181195   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:38:27.209358   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:38:27.235624   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:38:27.260629   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:38:27.286335   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:38:27.312745   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:38:27.340226   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:38:27.366125   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:38:27.395452   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:38:27.424801   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:38:27.463750   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:38:27.502091   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:38:27.530600   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:38:27.538166   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:38:27.552357   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559014   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.559181   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:38:27.569405   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:38:27.582829   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:38:27.597217   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602410   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.602472   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:38:27.610784   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:38:27.624272   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:38:27.635899   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640089   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.640162   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:38:27.647669   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:38:27.657702   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:38:27.661673   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:38:27.669449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:38:27.676756   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:38:27.683701   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:38:27.690945   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:38:27.698327   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
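	(Editor's note: each `openssl x509 -noout -in <cert> -checkend 86400` invocation above exits non-zero when the certificate will expire within the next 24 hours, which lets the caller spot certificates that need renewal before restarting the control plane. A minimal Go sketch of the equivalent check, assuming a PEM-encoded certificate file; hypothetical helper, not minikube code:)

	// validFor reports whether the certificate at path is still valid d from now,
	// mirroring openssl's -checkend behaviour.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("%s: no PEM block found", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for the next 24h:", ok)
	}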
	I0919 22:38:27.705328   95759 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP: Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false
DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:38:27.705437   95759 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:38:27.705491   95759 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:38:27.743232   95759 cri.go:89] found id: "55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645"
	I0919 22:38:27.743258   95759 cri.go:89] found id: "79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9"
	I0919 22:38:27.743263   95759 cri.go:89] found id: "32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3"
	I0919 22:38:27.743269   95759 cri.go:89] found id: "935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba"
	I0919 22:38:27.743273   95759 cri.go:89] found id: "13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87"
	I0919 22:38:27.743277   95759 cri.go:89] found id: ""
	I0919 22:38:27.743327   95759 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:38:27.766931   95759 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","pid":859,"status":"running","bundle":"/run/containers/storage/overlay-containers/13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87/userdata","rootfs":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","created":"2025-09-19T22:38:27.457722678Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"n
ame\\\":\\\"probe-port\\\",\\\"hostPort\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.401544575Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223d
c4628f1e45acc16ccdb\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/442db62cd7567e3c806501d825c6c5d23003b614741e7fbf0e795a362ea67a21/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0fb5a565c96e537910c2f0be84cba5e78d505d3fc126b65c22ff047a404b942a","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"
/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/ee72b99d\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kub
ernetes.io/config.seen":"2025-09-19T22:38:26.901880352Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","pid":878,"status":"running","bundle":"/run/containers/storage/overlay-containers/32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3/userdata","rootfs":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","created":"2025-09-19T22:38:27.465010624Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"0","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMe
ssagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.416092699Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb
866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/0.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\"}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/72e57a2592f75caf73cfa22398d5c5c23f84604ab07514c7bceaf51f91d603f5/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"01eeb16fe8f462df27f16cc298e1b9267fc8916156571e710626134b712b0cbe","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"cont
ainer_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/a6d77d36\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:38:26.901891443Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd
.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","pid":954,"status":"running","bundle":"/run/containers/storage/overlay-containers/55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645/userdata","rootfs":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","created":"2025-09-19T22:38:27.515032823Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.
hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.443516596Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-98415
8\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/118384c8d6dc773d29b1dc159de9c9ee23b8eaeb8bcc8413b688fa07b21abc09/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"0d488246e5b370f4828f5c11e5390777cc4cb5ea84090c958d6b601b35235de5","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kuberne
tes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/d0001fc3\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"hos
t_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:38:26.901886915Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"79c74b643f5a5959b25d582e997875f3399705b
3da970e161badc0d1521410a9","pid":921,"status":"running","bundle":"/run/containers/storage/overlay-containers/79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9/userdata","rootfs":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","created":"2025-09-19T22:38:27.502254065Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.ku
bernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.438041518Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-
scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/fc06cd1000c85e9cd4673a36b81650123792de7d25d573330b62dfab20204623/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8f2d6202aa772c3f9122a164a8b2d4d7ee64338d9bc1d0ea92d9989d81da3a27","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\"
:\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/6dc9da94\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:38:26.901890185Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDepen
dencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","pid":903,"status":"running","bundle":"/run/containers/storage/overlay-containers/935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba/userdata","rootfs":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","created":"2025-09-19T22:38:27.483620953Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"1","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"7e
aa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"1\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:38:27.414415487Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"kube-controlle
r-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/1.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":1}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/294f08962cf3b85109646e67c49c8e611f769c418e606db4b191cb3508ca3407/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_1","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"8871adc8c975575b11386f10c2278ccafbe420230c4e6fe1c76b13467b620c80","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6b05a580a113699
67b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/e63161fc\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.conf\",\"readonl
y\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a11369967b27d393af16",
"kubernetes.io/config.seen":"2025-09-19T22:38:26.901888813Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:38:27.767290   95759 cri.go:126] list returned 5 containers
	I0919 22:38:27.767310   95759 cri.go:129] container: {ID:13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 Status:running}
	I0919 22:38:27.767328   95759 cri.go:135] skipping {13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87 running}: state = "running", want "paused"
	I0919 22:38:27.767344   95759 cri.go:129] container: {ID:32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 Status:running}
	I0919 22:38:27.767353   95759 cri.go:135] skipping {32b11c5432de7492bf977b4a43e0c738bbac59781405c8ea2d53fd7e448762c3 running}: state = "running", want "paused"
	I0919 22:38:27.767369   95759 cri.go:129] container: {ID:55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 Status:running}
	I0919 22:38:27.767378   95759 cri.go:135] skipping {55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645 running}: state = "running", want "paused"
	I0919 22:38:27.767384   95759 cri.go:129] container: {ID:79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 Status:running}
	I0919 22:38:27.767393   95759 cri.go:135] skipping {79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9 running}: state = "running", want "paused"
	I0919 22:38:27.767399   95759 cri.go:129] container: {ID:935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba Status:running}
	I0919 22:38:27.767405   95759 cri.go:135] skipping {935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba running}: state = "running", want "paused"
	I0919 22:38:27.767454   95759 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:38:27.777467   95759 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:38:27.777485   95759 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:38:27.777529   95759 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:38:27.786748   95759 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:38:27.787254   95759 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.787385   95759 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:38:27.787739   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.788395   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:38:27.788915   95759 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:38:27.788933   95759 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:38:27.788940   95759 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:38:27.788945   95759 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:38:27.788950   95759 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:38:27.788983   95759 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:38:27.789419   95759 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:38:27.799384   95759 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:38:27.799408   95759 kubeadm.go:593] duration metric: took 21.916898ms to restartPrimaryControlPlane
	I0919 22:38:27.799419   95759 kubeadm.go:394] duration metric: took 94.114072ms to StartCluster
	I0919 22:38:27.799438   95759 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.799508   95759 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:38:27.800283   95759 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:38:27.800531   95759 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:38:27.800560   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:38:27.800569   95759 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:38:27.800796   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.803656   95759 out.go:179] * Enabled addons: 
	I0919 22:38:27.804977   95759 addons.go:514] duration metric: took 4.403593ms for enable addons: enabled=[]
	I0919 22:38:27.805014   95759 start.go:246] waiting for cluster config update ...
	I0919 22:38:27.805026   95759 start.go:255] writing updated cluster config ...
	I0919 22:38:27.806661   95759 out.go:203] 
	I0919 22:38:27.808147   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:27.808240   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.809900   95759 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:38:27.811058   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:38:27.812367   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:38:27.813643   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:38:27.813670   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:38:27.813747   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:38:27.813763   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:38:27.813745   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:38:27.813880   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:27.838519   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:38:27.838542   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:38:27.838565   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:38:27.838595   95759 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:38:27.838659   95759 start.go:364] duration metric: took 44.758µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:38:27.838683   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:38:27.838692   95759 fix.go:54] fixHost starting: m02
	I0919 22:38:27.838992   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:27.861121   95759 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:38:27.861152   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:38:27.863184   95759 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:38:27.863257   95759 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:38:28.125822   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:38:28.146346   95759 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:38:28.146733   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:28.168173   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:38:28.168475   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:38:28.168559   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:28.189073   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:28.189415   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:28.189432   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:38:28.190241   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:45924->127.0.0.1:32818: read: connection reset by peer
	I0919 22:38:31.326317   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.326343   95759 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:38:31.326396   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.346064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.346303   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.346317   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:38:31.495830   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:38:31.495906   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.515009   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:31.515247   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:31.515266   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:38:31.654008   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:38:31.654036   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:38:31.654057   95759 ubuntu.go:190] setting up certificates
	I0919 22:38:31.654067   95759 provision.go:84] configureAuth start
	I0919 22:38:31.654148   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:31.672869   95759 provision.go:143] copyHostCerts
	I0919 22:38:31.672912   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.672970   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:38:31.672984   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:38:31.673073   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:38:31.673199   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673230   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:38:31.673241   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:38:31.673286   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:38:31.673375   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673403   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:38:31.673410   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:38:31.673450   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:38:31.673525   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:38:31.832848   95759 provision.go:177] copyRemoteCerts
	I0919 22:38:31.832920   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:38:31.832966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:31.850721   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:31.949325   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:38:31.949391   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:38:31.976597   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:38:31.976650   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:38:32.002584   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:38:32.002653   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:38:32.035331   95759 provision.go:87] duration metric: took 381.249624ms to configureAuth
	I0919 22:38:32.035366   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:38:32.035610   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:38:32.035718   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.058439   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:38:32.058702   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32818 <nil> <nil>}
	I0919 22:38:32.058739   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:38:32.484521   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:38:32.484550   95759 machine.go:96] duration metric: took 4.316059426s to provisionDockerMachine
	I0919 22:38:32.484563   95759 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:38:32.484576   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:38:32.484635   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:38:32.484697   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.510926   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.619996   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:38:32.629566   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:38:32.629676   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:38:32.629727   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:38:32.629764   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:38:32.629806   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:38:32.629922   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:38:32.630086   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:38:32.630147   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:38:32.630353   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:38:32.645004   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:38:32.675202   95759 start.go:296] duration metric: took 190.622889ms for postStartSetup
	I0919 22:38:32.675288   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:38:32.675327   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.697580   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.795763   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:38:32.801249   95759 fix.go:56] duration metric: took 4.962547133s for fixHost
	I0919 22:38:32.801275   95759 start.go:83] releasing machines lock for "ha-984158-m02", held for 4.962602853s
	I0919 22:38:32.801364   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:38:32.827878   95759 out.go:179] * Found network options:
	I0919 22:38:32.829587   95759 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:38:32.830969   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:38:32.831030   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:38:32.831146   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:38:32.831196   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.831204   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:38:32.831253   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:38:32.853448   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:32.853718   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32818 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:38:33.150612   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:38:33.160301   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.176730   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:38:33.176815   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:38:33.191328   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:38:33.191364   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:38:33.191416   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:38:33.191485   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:38:33.213815   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:38:33.231542   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:38:33.231635   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:38:33.247095   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:38:33.260329   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:38:33.380840   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:38:33.498308   95759 docker.go:234] disabling docker service ...
	I0919 22:38:33.498382   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:38:33.517853   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:38:33.536133   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:38:33.652463   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:38:33.761899   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:38:33.774677   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:38:33.793915   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:38:33.793969   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.804996   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:38:33.805057   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.816056   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.827802   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.840124   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:38:33.850301   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.861287   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.871826   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:38:33.883496   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:38:33.893950   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:38:33.906440   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:38:34.043971   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:04.326209   95759 ssh_runner.go:235] Completed: sudo systemctl restart crio: (30.282202499s)
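(Editor's note, not part of the log: the sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the crio restart. A minimal spot-check sketch for the node, using only the paths and values that appear in the log; grep/systemctl/crictl are assumed to be available inside the kic container, as the log itself invokes them.)

    # The drop-in edited via sed above should now carry these settings.
    sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
        /etc/crio/crio.conf.d/02-crio.conf
    # Roughly expected:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   "net.ipv4.ip_unprivileged_port_start=0",

    # Confirm CRI-O came back after the 30s restart and its socket answers.
    sudo systemctl is-active crio
    sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version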
	I0919 22:39:04.326243   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:04.326297   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:04.330226   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:04.330288   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:04.334075   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:04.369702   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:04.369800   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.406718   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:04.445793   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:04.446931   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:04.448076   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:04.466313   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:04.470940   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:04.487515   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:04.487734   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:04.487986   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:04.509829   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:04.510158   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:39:04.510174   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:04.510188   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:04.510345   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:04.510395   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:04.510409   95759 certs.go:256] generating profile certs ...
	I0919 22:39:04.510508   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:04.510584   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.84abfbbb
	I0919 22:39:04.510636   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:04.510651   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:04.510678   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:04.510696   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:04.510717   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:04.510733   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:04.510752   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:04.510781   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:04.510806   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:04.510875   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:04.510915   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:04.510928   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:04.510960   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:04.510988   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:04.511020   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:04.511077   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:04.511136   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:04.511156   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:04.511176   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:04.511229   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:04.532173   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:04.620518   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:04.624965   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:04.638633   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:04.642459   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:04.656462   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:04.660491   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:04.673947   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:04.678496   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:04.694022   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:04.698129   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:04.711457   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:04.715160   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:04.729617   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:04.756565   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:04.783062   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:04.808557   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:04.834684   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:04.860337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:04.887473   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:04.913478   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:04.941337   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:04.967151   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:04.994669   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:05.028238   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:05.050978   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:05.073833   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:05.097285   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:05.120404   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:05.142847   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:05.163160   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:05.184053   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:05.190286   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:05.200925   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.204978   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.205054   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:05.211914   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:05.222874   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:05.234900   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238900   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.238947   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:05.246276   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:05.255894   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:05.266269   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270313   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.270382   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:05.278196   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
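(Editor's note: the symlink names created above are the standard OpenSSL CA-directory layout, i.e. `<subject-hash>.0`. A quick illustration; the hashes follow directly from the symlink names in the log.)

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # -> b5213941, hence b5213941.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem        # -> 51391683, hence 51391683.0
    openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem       # -> 3ec20f2e, hence 3ec20f2e.0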
	I0919 22:39:05.287746   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:05.291476   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:05.298503   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:05.305486   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:05.312720   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:05.319784   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:05.327527   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
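(Editor's note: the `-checkend 86400` probes above only assert that each certificate remains valid for another 24 hours. To see the actual expiry dates, a sketch along these lines works, reusing the certificate paths probed in the log.)

    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      printf '%s: ' "$c"
      sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/$c.crt"
    done
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt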
	I0919 22:39:05.334693   95759 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:39:05.334792   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
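(Editor's note: the unit text above is what lands in the 10-kubeadm.conf drop-in scp'd a few lines below. A small sketch for inspecting it on the node, assuming the paths from the log and standard systemd tooling in the container.)

    # The kubeadm drop-in minikube writes for the kubelet (363 bytes per the log below).
    sudo cat /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # After kubelet is (re)started further down, the node-specific flags are visible here:
    systemctl cat kubelet | grep -E 'hostname-override|node-ip'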
	I0919 22:39:05.334818   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:05.334851   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:05.347510   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:05.347572   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
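(Editor's note: once this manifest is written to /etc/kubernetes/manifests/kube-vip.yaml a few lines below, the VIP can be checked directly on a control-plane node. A rough sketch; the address, interface, and port are taken from the env vars in the config above, and crictl/curl are already used elsewhere in this log.)

    # Static pod should be picked up by the kubelet from the manifests directory.
    sudo crictl ps --name kube-vip
    # The lease holder adds the VIP to eth0.
    ip addr show eth0 | grep -F 192.168.49.254
    # And the API server should answer through the VIP on 8443.
    curl -k --max-time 2 https://192.168.49.254:8443/healthz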
	I0919 22:39:05.347618   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:05.356984   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:05.357056   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:05.367597   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:05.387861   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:05.406815   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:39:05.427878   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:05.432487   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:05.444804   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.548051   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.560978   95759 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:05.561299   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.563075   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:05.564716   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:05.672434   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:05.689063   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:05.689191   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:05.689392   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698088   95759 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:39:05.698164   95759 node_ready.go:38] duration metric: took 8.753764ms for node "ha-984158-m02" to be "Ready" ...
	I0919 22:39:05.698182   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:05.698299   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:05.711300   95759 api_server.go:72] duration metric: took 150.274321ms to wait for apiserver process to appear ...
	I0919 22:39:05.711326   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:05.711345   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:05.716499   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:05.717555   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:05.717586   95759 api_server.go:131] duration metric: took 6.25291ms to wait for apiserver health ...
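(Editor's note: the healthz wait above can be reproduced by hand against the same endpoint from the log; a minimal sketch, where -k skips CA verification for a quick check.)

    curl -sk https://192.168.49.2:8443/healthz ; echo              # expect: ok
    curl -sk https://192.168.49.2:8443/version | grep gitVersion   # expect v1.34.0, matching the "control plane version" line above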
	I0919 22:39:05.717595   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:05.724069   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:05.724156   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724172   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.724180   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.724186   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.724191   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.724196   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.724201   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.724210   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.724219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.724226   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.724233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.724241   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.724248   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.724256   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.724262   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.724268   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.724277   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.724285   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.724293   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.724298   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.724303   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.724308   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.724317   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.724325   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.724338   95759 system_pods.go:74] duration metric: took 6.735402ms to wait for pod list to return data ...
	I0919 22:39:05.724355   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:05.728216   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:05.728243   95759 default_sa.go:55] duration metric: took 3.879783ms for default service account to be created ...
	I0919 22:39:05.728256   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:05.733903   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:05.733937   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733945   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:05.733951   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:05.733954   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:05.733958   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:39:05.733961   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:05.733964   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:05.733969   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0919 22:39:05.733973   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:05.733976   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:05.733979   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:39:05.733982   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:05.733986   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:05.733990   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:39:05.733993   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:05.733995   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:05.733999   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0919 22:39:05.734007   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:05.734010   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:05.734013   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:39:05.734016   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:05.734019   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:05.734022   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:05.734025   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 22:39:05.734035   95759 system_pods.go:126] duration metric: took 5.77298ms to wait for k8s-apps to be running ...
	I0919 22:39:05.734044   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:05.734085   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:05.746589   95759 system_svc.go:56] duration metric: took 12.533548ms WaitForService to wait for kubelet
	I0919 22:39:05.746629   95759 kubeadm.go:578] duration metric: took 185.605298ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:05.746655   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:05.750196   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750221   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750233   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750236   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750240   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:05.750242   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:05.750246   95759 node_conditions.go:105] duration metric: took 3.586256ms to run NodePressure ...
	I0919 22:39:05.750259   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:05.750286   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:05.752610   95759 out.go:203] 
	I0919 22:39:05.754285   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:05.754392   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.756186   95759 out.go:179] * Starting "ha-984158-m03" control-plane node in "ha-984158" cluster
	I0919 22:39:05.757628   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:05.758862   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:05.760172   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:05.760197   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:05.760252   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:05.760314   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:05.760332   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:05.760441   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:05.782434   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:05.782456   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:05.782471   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:05.782504   95759 start.go:360] acquireMachinesLock for ha-984158-m03: {Name:mkf33267bff56ae1cde0b805408b7f6393558146 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:05.782575   95759 start.go:364] duration metric: took 49.512µs to acquireMachinesLock for "ha-984158-m03"
	I0919 22:39:05.782600   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:05.782610   95759 fix.go:54] fixHost starting: m03
	I0919 22:39:05.782826   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:05.800849   95759 fix.go:112] recreateIfNeeded on ha-984158-m03: state=Stopped err=<nil>
	W0919 22:39:05.800880   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:05.803272   95759 out.go:252] * Restarting existing docker container for "ha-984158-m03" ...
	I0919 22:39:05.803361   95759 cli_runner.go:164] Run: docker start ha-984158-m03
	I0919 22:39:06.059506   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m03 --format={{.State.Status}}
	I0919 22:39:06.078641   95759 kic.go:430] container "ha-984158-m03" state is running.
	I0919 22:39:06.079004   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:06.099001   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:06.099262   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:06.099315   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:06.117915   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:06.118166   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:06.118181   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:06.118862   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49366->127.0.0.1:32823: read: connection reset by peer
	I0919 22:39:09.258735   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.258764   95759 ubuntu.go:182] provisioning hostname "ha-984158-m03"
	I0919 22:39:09.258824   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.277807   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.278027   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.278041   95759 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m03 && echo "ha-984158-m03" | sudo tee /etc/hostname
	I0919 22:39:09.428956   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m03
	
	I0919 22:39:09.429040   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.447284   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:09.447535   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:09.447560   95759 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:39:09.593539   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:39:09.593573   95759 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:39:09.593598   95759 ubuntu.go:190] setting up certificates
	I0919 22:39:09.593609   95759 provision.go:84] configureAuth start
	I0919 22:39:09.593674   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:09.617495   95759 provision.go:143] copyHostCerts
	I0919 22:39:09.617537   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617594   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:39:09.617607   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:39:09.617684   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:39:09.617811   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.617846   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:39:09.617853   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:39:09.618482   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:39:09.618632   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618662   95759 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:39:09.618671   95759 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:39:09.618706   95759 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:39:09.618780   95759 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m03 san=[127.0.0.1 192.168.49.4 ha-984158-m03 localhost minikube]
	I0919 22:39:09.838307   95759 provision.go:177] copyRemoteCerts
	I0919 22:39:09.838429   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:39:09.838478   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:09.863933   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:09.983312   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:39:09.983424   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:39:10.021925   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:39:10.022008   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:39:10.063154   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:39:10.063276   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:39:10.104760   95759 provision.go:87] duration metric: took 511.137266ms to configureAuth
	I0919 22:39:10.104795   95759 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:39:10.105072   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:10.105290   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.130112   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:10.130385   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32823 <nil> <nil>}
	I0919 22:39:10.130414   95759 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:39:10.533816   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:39:10.533844   95759 machine.go:96] duration metric: took 4.434568252s to provisionDockerMachine
	I0919 22:39:10.533858   95759 start.go:293] postStartSetup for "ha-984158-m03" (driver="docker")
	I0919 22:39:10.533871   95759 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:39:10.533932   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:39:10.533966   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.553604   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.653755   95759 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:39:10.657424   95759 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:39:10.657456   95759 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:39:10.657463   95759 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:39:10.657469   95759 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:39:10.657479   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:39:10.657531   95759 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:39:10.657598   95759 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:39:10.657608   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:39:10.657691   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:39:10.667261   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:10.700579   95759 start.go:296] duration metric: took 166.704996ms for postStartSetup
	I0919 22:39:10.700662   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:39:10.700704   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.728418   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.830886   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:39:10.836158   95759 fix.go:56] duration metric: took 5.053541909s for fixHost
	I0919 22:39:10.836186   95759 start.go:83] releasing machines lock for "ha-984158-m03", held for 5.053597855s
	I0919 22:39:10.836256   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m03
	I0919 22:39:10.859049   95759 out.go:179] * Found network options:
	I0919 22:39:10.860801   95759 out.go:179]   - NO_PROXY=192.168.49.2,192.168.49.3
	W0919 22:39:10.862070   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862112   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862141   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:39:10.862155   95759 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:39:10.862232   95759 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:39:10.862282   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.862297   95759 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:39:10.862360   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m03
	I0919 22:39:10.885568   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:10.886944   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32823 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m03/id_rsa Username:docker}
	I0919 22:39:11.122339   95759 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:39:11.127789   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.138248   95759 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:39:11.138341   95759 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:39:11.147671   95759 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:39:11.147698   95759 start.go:495] detecting cgroup driver to use...
	I0919 22:39:11.147735   95759 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:39:11.147774   95759 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:39:11.160936   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:39:11.174826   95759 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:39:11.174888   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:39:11.190348   95759 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:39:11.203116   95759 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:39:11.321919   95759 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:39:11.432545   95759 docker.go:234] disabling docker service ...
	I0919 22:39:11.432608   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:39:11.446263   95759 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:39:11.458056   95759 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:39:11.572334   95759 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:39:11.685921   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:39:11.698336   95759 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:39:11.718031   95759 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:39:11.718164   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.731929   95759 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:39:11.732016   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.743385   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.755175   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.766807   95759 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:39:11.779733   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.791806   95759 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.802833   95759 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:39:11.813877   95759 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:39:11.824761   95759 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:39:11.835392   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:11.940776   95759 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:39:12.206168   95759 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:39:12.206252   95759 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:39:12.210177   95759 start.go:563] Will wait 60s for crictl version
	I0919 22:39:12.210235   95759 ssh_runner.go:195] Run: which crictl
	I0919 22:39:12.213924   95759 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:39:12.250824   95759 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:39:12.250899   95759 ssh_runner.go:195] Run: crio --version
	I0919 22:39:12.288367   95759 ssh_runner.go:195] Run: crio --version
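The sed sequence logged at 22:39:11.71–11.83 only touches four settings in the CRI-O drop-in before the service restart. A minimal way to confirm the result on the node, assuming the file path and key names exactly as they appear in the logged commands (untouched keys keep their kicbase defaults):

	sudo grep -nE 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' \
	    /etc/crio/crio.conf.d/02-crio.conf
	# expected after the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   default_sysctls = [
	#     "net.ipv4.ip_unprivileged_port_start=0",
	#   ]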
	I0919 22:39:12.331200   95759 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:39:12.332776   95759 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:39:12.334399   95759 out.go:179]   - env NO_PROXY=192.168.49.2,192.168.49.3
	I0919 22:39:12.335764   95759 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:39:12.353568   95759 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:39:12.357576   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:12.370671   95759 mustload.go:65] Loading cluster: ha-984158
	I0919 22:39:12.370930   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:12.371317   95759 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:39:12.389760   95759 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:39:12.390003   95759 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.4
	I0919 22:39:12.390016   95759 certs.go:194] generating shared ca certs ...
	I0919 22:39:12.390030   95759 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:39:12.390204   95759 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:39:12.390274   95759 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:39:12.390289   95759 certs.go:256] generating profile certs ...
	I0919 22:39:12.390403   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:39:12.390484   95759 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.2fccefa7
	I0919 22:39:12.390533   95759 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:39:12.390549   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:39:12.390568   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:39:12.390585   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:39:12.390601   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:39:12.390614   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:39:12.390628   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:39:12.390641   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:39:12.390653   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:39:12.390711   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:39:12.390749   95759 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:39:12.390761   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:39:12.390789   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:39:12.390812   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:39:12.390832   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:39:12.390871   95759 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:39:12.390895   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:12.390910   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:39:12.390923   95759 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:39:12.390971   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:39:12.408363   95759 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32813 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:39:12.497500   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:39:12.501626   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:39:12.514736   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:39:12.518842   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:39:12.534226   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:39:12.538486   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:39:12.551906   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:39:12.555555   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:39:12.568778   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:39:12.573237   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:39:12.587524   95759 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:39:12.591646   95759 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:39:12.605021   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:39:12.632905   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:39:12.658562   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:39:12.685222   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:39:12.710986   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 22:39:12.742821   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:39:12.774649   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:39:12.808068   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:39:12.840999   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:39:12.873033   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:39:12.904176   95759 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:39:12.935469   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:39:12.958451   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:39:12.983716   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:39:13.006372   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:39:13.026634   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:39:13.048003   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:39:13.067093   95759 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:39:13.091242   95759 ssh_runner.go:195] Run: openssl version
	I0919 22:39:13.097309   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:39:13.107657   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111389   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.111438   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:39:13.118417   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:39:13.129698   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:39:13.140452   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144194   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.144245   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:39:13.151266   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:39:13.161188   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:39:13.171891   95759 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176332   95759 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.176413   95759 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:39:13.184138   95759 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:39:13.193625   95759 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:39:13.197577   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:39:13.204628   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:39:13.211553   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:39:13.218449   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:39:13.225712   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:39:13.232770   95759 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
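Two idioms recur in the certificate block above: each CA copied to /usr/share/ca-certificates is linked into /etc/ssl/certs under its OpenSSL subject hash, and each control-plane cert is checked for at least 24 hours of remaining validity with `-checkend 86400`. A minimal sketch of both, reusing the file names from this log:

	# subject-hash link, as produced by the openssl/ln pair logged above
	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # e.g. b5213941
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"

	# -checkend 86400 exits 0 only if the cert is still valid 24h from now
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for >24h" || echo "expires within 24h"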
	I0919 22:39:13.239778   95759 kubeadm.go:926] updating node {m03 192.168.49.4 8443 v1.34.0 crio true true} ...
	I0919 22:39:13.239885   95759 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:39:13.239907   95759 kube-vip.go:115] generating kube-vip config ...
	I0919 22:39:13.239943   95759 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:39:13.252386   95759 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:39:13.252462   95759 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:39:13.252520   95759 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:39:13.261653   95759 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:39:13.261771   95759 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:39:13.271379   95759 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:39:13.292763   95759 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:39:13.314362   95759 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
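The kube-vip manifest generated above is written to /etc/kubernetes/manifests, which is the kubelet's static-pod directory in this setup, so the kubelet started just below runs kube-vip on this node without the scheduler being involved. A quick check sketch, assuming the crictl path used elsewhere in this log and the mirror-pod name that appears in the pod listing later in this run:

	sudo /usr/bin/crictl ps --name kube-vip                       # container created by the kubelet from the manifest
	kubectl --context ha-984158 -n kube-system get pod kube-vip-ha-984158-m03   # its mirror pod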
	I0919 22:39:13.334791   95759 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:39:13.338371   95759 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:39:13.350977   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.456433   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.469559   95759 start.go:235] Will wait 6m0s for node &{Name:m03 IP:192.168.49.4 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:39:13.469884   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.472456   95759 out.go:179] * Verifying Kubernetes components...
	I0919 22:39:13.474707   95759 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:39:13.588742   95759 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:39:13.602600   95759 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)
}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:39:13.602666   95759 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:39:13.602869   95759 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605956   95759 node_ready.go:49] node "ha-984158-m03" is "Ready"
	I0919 22:39:13.605979   95759 node_ready.go:38] duration metric: took 3.097172ms for node "ha-984158-m03" to be "Ready" ...
	I0919 22:39:13.605993   95759 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:39:13.606032   95759 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:39:13.618211   95759 api_server.go:72] duration metric: took 148.610181ms to wait for apiserver process to appear ...
	I0919 22:39:13.618235   95759 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:39:13.618251   95759 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:39:13.622760   95759 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:39:13.623811   95759 api_server.go:141] control plane version: v1.34.0
	I0919 22:39:13.623838   95759 api_server.go:131] duration metric: took 5.597306ms to wait for apiserver health ...
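The healthz probe logged at 22:39:13.618 goes straight to the apiserver on the primary node's IP. It can be reproduced from the host without client certificates, assuming the kubeadm default RBAC that exposes /healthz and /version to unauthenticated readers via system:public-info-viewer:

	curl -sk https://192.168.49.2:8443/healthz    # expected body: ok
	curl -sk https://192.168.49.2:8443/version    # apiserver build info, also readable anonymously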
	I0919 22:39:13.623847   95759 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:39:13.632153   95759 system_pods.go:59] 24 kube-system pods found
	I0919 22:39:13.632182   95759 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.632190   95759 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.632196   95759 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.632200   95759 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.632207   95759 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.632210   95759 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.632214   95759 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.632216   95759 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.632219   95759 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.632229   95759 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.632233   95759 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.632237   95759 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.632241   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.632247   95759 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.632253   95759 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.632256   95759 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.632259   95759 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.632261   95759 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.632264   95759 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.632274   95759 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.632277   95759 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.632282   95759 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.632285   95759 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.632288   95759 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.632295   95759 system_pods.go:74] duration metric: took 8.442512ms to wait for pod list to return data ...
	I0919 22:39:13.632305   95759 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:39:13.635316   95759 default_sa.go:45] found service account: "default"
	I0919 22:39:13.635337   95759 default_sa.go:55] duration metric: took 3.026488ms for default service account to be created ...
	I0919 22:39:13.635346   95759 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:39:13.733862   95759 system_pods.go:86] 24 kube-system pods found
	I0919 22:39:13.733908   95759 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running
	I0919 22:39:13.733922   95759 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:39:13.733929   95759 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running
	I0919 22:39:13.733937   95759 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running
	I0919 22:39:13.733945   95759 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:39:13.733952   95759 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:39:13.733958   95759 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:39:13.733964   95759 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:39:13.733969   95759 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running
	I0919 22:39:13.733974   95759 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running
	I0919 22:39:13.733985   95759 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:39:13.733995   95759 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running
	I0919 22:39:13.734001   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running
	I0919 22:39:13.734013   95759 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:39:13.734018   95759 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:39:13.734021   95759 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:39:13.734024   95759 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:39:13.734027   95759 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running
	I0919 22:39:13.734033   95759 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running
	I0919 22:39:13.734044   95759 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:39:13.734052   95759 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:39:13.734057   95759 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:39:13.734065   95759 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:39:13.734069   95759 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:39:13.734079   95759 system_pods.go:126] duration metric: took 98.726691ms to wait for k8s-apps to be running ...
	I0919 22:39:13.734091   95759 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:39:13.734175   95759 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:39:13.747528   95759 system_svc.go:56] duration metric: took 13.410723ms WaitForService to wait for kubelet
	I0919 22:39:13.747570   95759 kubeadm.go:578] duration metric: took 277.970313ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:39:13.747595   95759 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:39:13.751576   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751598   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751610   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751613   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751616   95759 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:39:13.751619   95759 node_conditions.go:123] node cpu capacity is 8
	I0919 22:39:13.751622   95759 node_conditions.go:105] duration metric: took 4.023347ms to run NodePressure ...
	I0919 22:39:13.751634   95759 start.go:241] waiting for startup goroutines ...
	I0919 22:39:13.751651   95759 start.go:255] writing updated cluster config ...
	I0919 22:39:13.753417   95759 out.go:203] 
	I0919 22:39:13.755135   95759 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:39:13.755254   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.757081   95759 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:39:13.758394   95759 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:39:13.759816   95759 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:39:13.761015   95759 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:39:13.761039   95759 cache.go:58] Caching tarball of preloaded images
	I0919 22:39:13.761051   95759 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:39:13.761261   95759 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:39:13.761304   95759 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:39:13.761429   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:13.782360   95759 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:39:13.782385   95759 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:39:13.782406   95759 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:39:13.782436   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:39:13.782501   95759 start.go:364] duration metric: took 44.732µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:39:13.782524   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:39:13.782534   95759 fix.go:54] fixHost starting: m04
	I0919 22:39:13.782740   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:13.801027   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:39:13.801060   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:39:13.802864   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:39:13.802931   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:39:14.055762   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:39:14.074848   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:39:14.075262   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:39:14.094352   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:39:14.094594   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:39:14.094647   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:39:14.114064   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:39:14.114317   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32828 <nil> <nil>}
	I0919 22:39:14.114330   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:39:14.114961   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50476->127.0.0.1:32828: read: connection reset by peer
	I0919 22:39:17.116460   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:20.118409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:23.120443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:26.120776   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:29.121743   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:32.123258   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:35.125391   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:38.125915   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:41.126437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:44.127525   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:47.128400   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:50.130402   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:53.132094   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:56.132448   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:39:59.133362   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:02.134004   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:05.136365   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:08.136767   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:11.137236   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:14.138295   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:17.139769   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:20.141642   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:23.143546   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:26.143966   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:29.144829   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:32.146423   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:35.148801   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:38.150005   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:41.150409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:44.150842   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:47.152406   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:50.154676   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:53.156471   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:56.157387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:40:59.158366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:02.160382   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:05.162387   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:08.162900   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:11.163385   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:14.164700   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:17.165484   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:20.167366   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:23.169809   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:26.170437   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:29.171409   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:32.173443   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:35.175650   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:38.176984   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:41.177465   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:44.179757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:47.181386   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:50.183757   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:53.185945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:56.186445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:41:59.187353   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:02.189451   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:05.191306   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:08.191935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:11.192418   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32828: connect: connection refused
	I0919 22:42:14.194206   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:42:14.194236   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:42:14.194304   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.214461   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.214567   95759 machine.go:96] duration metric: took 3m0.119960942s to provisionDockerMachine
	I0919 22:42:14.214652   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:42:14.214684   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.238129   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.238280   95759 retry.go:31] will retry after 248.39527ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.487752   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.507066   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.507179   95759 retry.go:31] will retry after 241.490952ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:14.749696   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:14.769271   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:14.769394   95759 retry.go:31] will retry after 573.29064ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.342939   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.361305   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.361440   95759 retry.go:31] will retry after 493.546865ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.855177   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.876393   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:15.876503   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:15.876520   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:15.876565   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:42:15.876594   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:15.896632   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:15.896744   95759 retry.go:31] will retry after 211.367435ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.109288   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.130175   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.130270   95759 retry.go:31] will retry after 289.868834ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.420891   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:16.442472   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:42:16.442604   95759 retry.go:31] will retry after 547.590918ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:16.990359   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:42:17.008923   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:42:17.009049   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009064   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009073   95759 fix.go:56] duration metric: took 3m3.226540631s for fixHost
	I0919 22:42:17.009081   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m3.226570319s
	W0919 22:42:17.009092   95759 start.go:714] error starting host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:42:17.009191   95759 out.go:285] ! StartHost failed, but will try again: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:42:17.009203   95759 start.go:729] Will try again in 5 seconds ...
	I0919 22:42:22.010253   95759 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:42:22.010363   95759 start.go:364] duration metric: took 70.627µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:42:22.010395   95759 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:42:22.010406   95759 fix.go:54] fixHost starting: m04
	I0919 22:42:22.010649   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.029262   95759 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:42:22.029294   95759 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:42:22.031096   95759 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:42:22.031220   95759 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:42:22.294621   95759 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:42:22.313475   95759 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:42:22.313799   95759 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:42:22.333284   95759 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:42:22.333514   95759 machine.go:93] provisionDockerMachine start ...
	I0919 22:42:22.333568   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:42:22.353907   95759 main.go:141] libmachine: Using SSH client type: native
	I0919 22:42:22.354187   95759 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32833 <nil> <nil>}
	I0919 22:42:22.354204   95759 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:42:22.354888   95759 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32833: read: connection reset by peer
	I0919 22:42:25.355457   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:28.356034   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:31.356407   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:34.358370   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:37.359693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:40.360614   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:43.362397   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:46.363784   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:49.364408   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:52.366596   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:55.367888   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:42:58.369219   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:01.370395   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:04.371156   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:07.372724   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:10.373695   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:13.374908   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:16.375383   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:19.376388   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:22.378537   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:25.379508   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:28.380693   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:31.381372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:34.383699   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:37.384935   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:40.385685   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:43.388048   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:46.388445   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:49.389657   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:52.391627   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:55.392687   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:43:58.393125   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:01.393619   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:04.395945   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:07.398372   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:10.398608   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:13.400912   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:16.401401   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:19.402479   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:22.404415   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:25.405562   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:28.406498   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:31.407755   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:34.410076   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:37.412454   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:40.413768   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:43.415168   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:46.416416   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:49.417399   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:52.419643   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:55.420363   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:44:58.420738   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:01.421609   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:04.423913   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:07.425430   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:10.426778   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:13.428381   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:16.429193   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:19.430490   95759 main.go:141] libmachine: Error dialing TCP: dial tcp 127.0.0.1:32833: connect: connection refused
	I0919 22:45:22.432491   95759 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:45:22.432543   95759 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:45:22.432609   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.452712   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.452777   95759 machine.go:96] duration metric: took 3m0.119250879s to provisionDockerMachine
	I0919 22:45:22.452858   95759 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:45:22.452892   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.472911   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.473047   95759 retry.go:31] will retry after 202.283506ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:22.676548   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:22.694834   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:22.694965   95759 retry.go:31] will retry after 463.907197ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.159340   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.178560   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.178658   95759 retry.go:31] will retry after 365.232594ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.544210   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.564214   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:23.564366   95759 start.go:268] error running df -h /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:23.564390   95759 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.564449   95759 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:45:23.564494   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.583703   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.583796   95759 retry.go:31] will retry after 343.872214ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:23.928329   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:23.946762   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:23.946864   95759 retry.go:31] will retry after 341.564773ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.289296   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.312255   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	I0919 22:45:24.312369   95759 retry.go:31] will retry after 341.728488ms: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.655044   95759 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	W0919 22:45:24.674698   95759 cli_runner.go:211] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04 returned with exit code 1
	W0919 22:45:24.674839   95759 start.go:283] error running df -BG /var: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.674858   95759 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.674871   95759 fix.go:56] duration metric: took 3m2.664466794s for fixHost
	I0919 22:45:24.674881   95759 start.go:83] releasing machines lock for "ha-984158-m04", held for 3m2.664502957s
	W0919 22:45:24.674982   95759 out.go:285] * Failed to start docker container. Running "minikube delete -p ha-984158" may fix it: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	I0919 22:45:24.677468   95759 out.go:203] 
	W0919 22:45:24.678601   95759 out.go:285] X Exiting due to GUEST_START: failed to start node: adding node: Failed to start host: provision: get ssh host-port: unable to inspect a not running container to get SSH port
	W0919 22:45:24.678620   95759 out.go:285] * 
	W0919 22:45:24.680349   95759 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0919 22:45:24.681822   95759 out.go:203] 
	
	
	==> CRI-O <==
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.490551103Z" level=info msg="Starting container: b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74" id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:38:34 ha-984158 crio[565]: time="2025-09-19 22:38:34.498969531Z" level=info msg="Started container" PID=1368 containerID=b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74 description=kube-system/coredns-66bc5c9577-ltjmz/coredns id=6d013997-4bc0-47b8-a2e4-8ad50a27feae name=/runtime.v1.RuntimeService/StartContainer sandboxID=815752732ad74ae8e5961e3c79b9a821b4903503b20978d661c98a6a36ef4b9d
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.902522587Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.906977791Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907009772Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.907037293Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911428136Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911466965Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.911486751Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915460017Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915497091Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.915525773Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919544523Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:38:44 ha-984158 crio[565]: time="2025-09-19 22:38:44.919575130Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.012886161Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013169907Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=d4c92288-6a5e-4f04-96fc-76b8e890177a name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.013901636Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.014168511Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=be2f997d-9458-49d8-bca1-fcc18c2e9b9f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018353225Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.018511963Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036610475Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/passwd: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.036659847Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/8919f8bf0a44a05938e764851b8252bfdd952ff2d6aefa1882e35c8a0555438f/merged/etc/group: no such file or directory"
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.095888981Z" level=info msg="Created container f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f: kube-system/storage-provisioner/storage-provisioner" id=00fb4cd8-8bf1-4b30-8398-7f8f2949db03 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.096561974Z" level=info msg="Starting container: f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f" id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:39:05 ha-984158 crio[565]: time="2025-09-19 22:39:05.104038077Z" level=info msg="Started container" PID=1741 containerID=f73602ecef49bd46313a999f2137eea9370c3511211c3961b8b8c90352ad183f description=kube-system/storage-provisioner/storage-provisioner id=4af7ccf6-09cd-4a8b-a8a3-ab196defe346 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c833b8c10762b8d7272f8c569836ab444d6d5b309d15da090c6b1664db70ed7c
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f73602ecef49b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   6 minutes ago       Running             storage-provisioner       3                   c833b8c10762b       storage-provisioner
	b2cb38a999cac       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 minutes ago       Running             coredns                   1                   815752732ad74       coredns-66bc5c9577-ltjmz
	676fc8265fa71       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   7 minutes ago       Running             busybox                   1                   853e9db2bdfa8       busybox-7b57f96db7-rnjl7
	7e1e5941c1568       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   7 minutes ago       Running             kindnet-cni               1                   547d271717250       kindnet-rd882
	c9027fdf07d43       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   7 minutes ago       Exited              storage-provisioner       2                   c833b8c10762b       storage-provisioner
	a22f43664887c       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   7 minutes ago       Running             kube-proxy                1                   d51eb4228f1eb       kube-proxy-hdxxn
	377f1c9e1defe       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   7 minutes ago       Running             coredns                   1                   e756edadac294       coredns-66bc5c9577-5gnbx
	55f2dff5151a8       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   7 minutes ago       Running             kube-apiserver            1                   0d488246e5b37       kube-apiserver-ha-984158
	79c74b643f5a5       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   7 minutes ago       Running             kube-scheduler            1                   8f2d6202aa772       kube-scheduler-ha-984158
	32b11c5432de7       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   7 minutes ago       Running             kube-vip                  0                   01eeb16fe8f46       kube-vip-ha-984158
	935ae0c237d97       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   7 minutes ago       Running             kube-controller-manager   1                   8871adc8c9755       kube-controller-manager-ha-984158
	13b67e56860f8       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 minutes ago       Running             etcd                      1                   0fb5a565c96e5       etcd-ha-984158
	
	
	==> coredns [377f1c9e1defee6bb59c215f0a1a03ae29aa5b77855a39725abe9d88f4182f71] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47318 - 34366 "HINFO IN 8418387040146284568.7180250627065820856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.092087824s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [b2cb38a999cac4269513a263840936a7f0a5f1ef129b45bd9f71e4b65f4c4a74] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:47142 - 36068 "HINFO IN 3054302858159562754.8459958995054926466. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.023807531s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:41 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:42:07 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 ce0d9390578a44a698c3fda69fb20273
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  Starting                 7m7s                   kube-proxy       
	  Normal  NodeHasSufficientPID     12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 12m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  12m (x8 over 12m)      kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  11m                    kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     11m                    kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    11m                    kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                11m                    kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           9m13s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  Starting                 7m16s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m15s (x8 over 7m16s)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m15s (x8 over 7m16s)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m15s (x8 over 7m16s)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           6m30s                  node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 22:45:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:39:13 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 82b431cbd7af4c3f980669ae3ee3bdc5
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m31s                  kube-proxy       
	  Normal  Starting                 11m                    kube-proxy       
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           10m                    node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  NodeHasNoDiskPressure    9m18s (x8 over 9m18s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m18s (x8 over 9m18s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m18s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m18s (x8 over 9m18s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m13s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 7m14s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m14s (x8 over 7m14s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m14s (x8 over 7m14s)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m14s (x8 over 7m14s)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           7m6s                   node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           6m30s                  node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [13b67e56860f84e90d1e47cdb2dbe4fee5bad00728a0521dc7cfab0a80f9ad87] <==
	{"level":"info","ts":"2025-09-19T22:39:07.226913Z","caller":"rafthttp/stream.go:248","msg":"set message encoder","from":"aec36adc501070cc","to":"e8495135083f8257","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2025-09-19T22:39:07.226991Z","caller":"rafthttp/stream.go:273","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.240674Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:39:07.244098Z","caller":"rafthttp/stream.go:411","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597341Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-09-19T22:39:07.597413Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"e8495135083f8257","rtt":"0s","error":"dial tcp 192.168.49.4:2380: connect: no route to host"}
	{"level":"warn","ts":"2025-09-19T22:45:30.851173Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55428","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:45:30.877959Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"192.168.49.4:55454","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:45:30.889351Z","logger":"raft","caller":"v3@v3.6.0/raft.go:1981","msg":"aec36adc501070cc switched to configuration voters=(7185048267463743064 12593026477526642892)"}
	{"level":"info","ts":"2025-09-19T22:45:30.890632Z","caller":"membership/cluster.go:460","msg":"removed member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"e8495135083f8257","removed-remote-peer-urls":["https://192.168.49.4:2380"],"removed-remote-peer-is-learner":false}
	{"level":"info","ts":"2025-09-19T22:45:30.890677Z","caller":"rafthttp/peer.go:316","msg":"stopping remote peer","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.890748Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890772Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.890793Z","caller":"rafthttp/stream.go:285","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890802Z","caller":"rafthttp/stream.go:293","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.890863Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891077Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","error":"context canceled"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891161Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"e8495135083f8257","error":"failed to read e8495135083f8257 on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2025-09-19T22:45:30.891186Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.891277Z","caller":"rafthttp/stream.go:420","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257","error":"context canceled"}
	{"level":"info","ts":"2025-09-19T22:45:30.891350Z","caller":"rafthttp/stream.go:441","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"aec36adc501070cc","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.891389Z","caller":"rafthttp/peer.go:321","msg":"stopped remote peer","remote-peer-id":"e8495135083f8257"}
	{"level":"info","ts":"2025-09-19T22:45:30.891424Z","caller":"rafthttp/transport.go:354","msg":"removed remote peer","local-member-id":"aec36adc501070cc","removed-remote-peer-id":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.898371Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"e8495135083f8257"}
	{"level":"warn","ts":"2025-09-19T22:45:30.901479Z","caller":"rafthttp/http.go:396","msg":"rejected stream from remote peer because it was removed","local-member-id":"aec36adc501070cc","remote-peer-id-stream-handler":"aec36adc501070cc","remote-peer-id-from":"e8495135083f8257"}
	
	
	==> kernel <==
	 22:45:42 up  1:28,  0 users,  load average: 0.36, 0.57, 0.58
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7e1e5941c1568be6947d5879f8b05807535d937790e13f1de20f69c7cb7f0ccd] <==
	I0919 22:44:54.902217       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:44:54.902415       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:44:54.902428       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:04.902949       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:04.902981       1 main.go:301] handling current node
	I0919 22:45:04.902997       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:04.903003       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:04.903212       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:04.903225       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:14.910562       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:14.910592       1 main.go:301] handling current node
	I0919 22:45:14.910608       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:14.910612       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:14.910787       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:14.910796       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:24.910192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:24.910232       1 main.go:301] handling current node
	I0919 22:45:24.910253       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:24.910259       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 22:45:24.910469       1 main.go:297] Handling node with IPs: map[192.168.49.4:{}]
	I0919 22:45:24.910478       1 main.go:324] Node ha-984158-m03 has CIDR [10.244.2.0/24] 
	I0919 22:45:34.901935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 22:45:34.901974       1 main.go:301] handling current node
	I0919 22:45:34.901990       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 22:45:34.901994       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [55f2dff5151a84f4cd008a24f42d14ca33e7a01145661ea186681f3cdf3a2645] <==
	I0919 22:38:33.237483       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0919 22:38:33.237492       1 cache.go:39] Caches are synced for autoregister controller
	I0919 22:38:33.244473       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I0919 22:38:33.256040       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I0919 22:38:33.256074       1 policy_source.go:240] refreshing policies
	I0919 22:38:33.258725       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I0919 22:38:33.330813       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 22:38:33.340553       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0919 22:38:33.343923       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0919 22:38:34.057940       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I0919 22:38:34.123968       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0919 22:38:34.654257       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I0919 22:38:36.563731       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 22:38:37.013446       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I0919 22:39:07.528152       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I0919 22:39:58.806991       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:39:59.831450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:12.701181       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:41:22.300169       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:28.420805       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:42:42.481948       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:43.538989       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:43:45.026909       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:44:54.365379       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:45:11.122450       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [935ae0c237d9726f010f7e037ce2a278c773024b0cd880d3327d7ad757b992ba] <==
	I0919 22:38:36.560524       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 22:38:36.561755       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 22:38:36.563075       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 22:38:36.564243       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 22:38:36.565318       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 22:38:36.567600       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 22:38:36.567791       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 22:38:36.567913       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 22:38:36.568459       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 22:38:36.568957       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 22:38:36.577191       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I0919 22:38:36.580467       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I0919 22:38:36.580630       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0919 22:38:36.580760       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158"
	I0919 22:38:36.580809       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m02"
	I0919 22:38:36.580815       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-984158-m03"
	I0919 22:38:36.580872       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 22:38:36.590818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 22:39:15.982637       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-6rhpz\": the object has been modified; please apply your changes to the latest version and try again"
	I0919 22:39:15.983309       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"4dd58d83-a50d-4db8-9919-ac6b8b041c9e", APIVersion:"v1", ResourceVersion:"312", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-6rhpz EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-6rhpz": the object has been modified; please apply your changes to the latest version and try again
	E0919 22:45:36.573357       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573394       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573400       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573405       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:45:36.573411       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	
	
	==> kube-proxy [a22f43664887c7fcbb5c6716c9592a2cd654e455fd905f9edd287a2f6c9aba58] <==
	I0919 22:38:34.512575       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:38:34.579894       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:38:34.680953       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:38:34.680992       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:38:34.681200       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:38:34.704454       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:38:34.704534       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:38:34.710440       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:38:34.710834       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:38:34.710880       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:34.712458       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:38:34.712504       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:38:34.712543       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:38:34.712552       1 config.go:309] "Starting node config controller"
	I0919 22:38:34.712564       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:38:34.712555       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:38:34.712587       1 config.go:200] "Starting service config controller"
	I0919 22:38:34.712613       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:38:34.812688       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:38:34.812708       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:38:34.812734       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:38:34.812768       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [79c74b643f5a5959b25d582e997875f3399705b3da970e161badc0d1521410a9] <==
	I0919 22:38:28.535240       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:38:33.134307       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:38:33.134372       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:38:33.134385       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:38:33.134394       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:38:33.174419       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:38:33.174609       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:38:33.180536       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.180680       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:38:33.184947       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:38:33.185091       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:38:33.284411       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 22:43:36 ha-984158 kubelet[720]: E0919 22:43:36.954234     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321816953914416  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955354     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:46 ha-984158 kubelet[720]: E0919 22:43:46.955393     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321826955128886  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956480     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:43:56 ha-984158 kubelet[720]: E0919 22:43:56.956517     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321836956221895  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958459     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:06 ha-984158 kubelet[720]: E0919 22:44:06.958501     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321846958195920  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.959975     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:16 ha-984158 kubelet[720]: E0919 22:44:16.960016     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321856959733254  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961918     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:26 ha-984158 kubelet[720]: E0919 22:44:26.961955     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321866961564924  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964584     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:36 ha-984158 kubelet[720]: E0919 22:44:36.964626     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321876963854129  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966592     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:46 ha-984158 kubelet[720]: E0919 22:44:46.966634     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321886966345111  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968415     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:44:56 ha-984158 kubelet[720]: E0919 22:44:56.968455     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321896968168694  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969597     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:06 ha-984158 kubelet[720]: E0919 22:45:06.969639     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321906969346664  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971464     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:16 ha-984158 kubelet[720]: E0919 22:45:16.971505     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321916971187127  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:26 ha-984158 kubelet[720]: E0919 22:45:26.972696     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321926972495462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:26 ha-984158 kubelet[720]: E0919 22:45:26.972734     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321926972495462  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:36 ha-984158 kubelet[720]: E0919 22:45:36.973935     720 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758321936973692417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 22:45:36 ha-984158 kubelet[720]: E0919 22:45:36.973973     720 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758321936973692417  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
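Note: the dump above is the standard post-mortem bundle the test helpers collect on failure (node describe, dmesg, and per-component container logs). A minimal sketch for regathering the same data by hand against this profile, assuming the profile still exists and the ha-984158 kubeconfig context is intact:

	# post-mortem log bundle for the whole profile (dmesg, etcd, kubelet, ...)
	out/minikube-linux-amd64 -p ha-984158 logs --file=ha-984158-logs.txt
	# node conditions, allocated resources and events for the secondary control plane
	kubectl --context ha-984158 describe node ha-984158-m02
	# cluster-wide event stream, newest last
	kubectl --context ha-984158 get events -A --sort-by=.lastTimestamp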
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-qctnj
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj
helpers_test.go:290: (dbg) kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-qctnj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jf9wg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jf9wg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  15s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  15s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s                default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  13s (x2 over 15s)  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (2.63s)
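Note on the FailedScheduling events above: the scheduler reports one node marked unschedulable and two nodes rejected by pod anti-affinity, so the replacement busybox pod stays Pending. Hedged follow-up commands for inspecting both conditions; the Deployment name "busybox" is inferred from the ReplicaSet busybox-7b57f96db7 and is not shown verbatim in the log:

	# anti-affinity rules on the workload (assumes a Deployment named "busybox")
	kubectl --context ha-984158 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
	# which nodes are currently marked unschedulable
	kubectl --context ha-984158 get nodes -o custom-columns=NAME:.metadata.name,UNSCHEDULABLE:.spec.unschedulable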

                                                
                                    
x
+
TestMultiControlPlane/serial/RestartCluster (1030.14s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E0919 22:46:52.325383   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:47:58.612161   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:51:52.325645   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:52:58.611252   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:54:21.683322   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:54:55.402147   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:56:52.324295   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:57:58.611663   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:01:52.324642   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:02:58.613306   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: signal: killed (17m7.69403459s)
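The start command did not report a minikube error of its own here; it was killed by signal after roughly 17 minutes ("signal: killed"), consistent with the suite-level timeout. A sketch for reproducing the restart by hand with verbose logs and an explicit, longer component wait; using --wait-timeout (a standard minikube start flag, default 6m0s) for this purpose is an assumption, not something the test itself does:

	# rerun the restart with verbose logs and a longer wait for components
	out/minikube-linux-amd64 -p ha-984158 start --wait true --wait-timeout 20m \
	  --alsologtostderr -v 5 --driver=docker --container-runtime=crio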

                                                
                                                
-- stdout --
	* [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	* Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	* Enabled addons: 
	
	* Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...
	* Found network options:
	  - NO_PROXY=192.168.49.2
	* Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	  - env NO_PROXY=192.168.49.2
	* Verifying Kubernetes components...
	
	* Starting "ha-984158-m04" worker node in "ha-984158" cluster
	* Pulling base image v0.0.48 ...

                                                
                                                
-- /stdout --
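The captured stdout ends while the "ha-984158-m04" worker node is still being started, so the restart was in progress when the process was killed. Hypothetical checks for a multi-node restart that stalls on a secondary node, with the profile and node names taken from the output above:

	# container state for every node of the profile
	docker ps -a --filter name=ha-984158
	# nodes as minikube tracks them, and per-node component status
	out/minikube-linux-amd64 -p ha-984158 node list
	out/minikube-linux-amd64 -p ha-984158 status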
** stderr ** 
	I0919 22:46:12.216361  108877 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:46:12.216654  108877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.216665  108877 out.go:374] Setting ErrFile to fd 2...
	I0919 22:46:12.216669  108877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.216929  108877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:46:12.217473  108877 out.go:368] Setting JSON to false
	I0919 22:46:12.218412  108877 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5322,"bootTime":1758316650,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:46:12.218505  108877 start.go:140] virtualization: kvm guest
	I0919 22:46:12.220990  108877 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:46:12.222652  108877 notify.go:220] Checking for updates...
	I0919 22:46:12.222716  108877 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:46:12.224405  108877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:46:12.226356  108877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:12.227945  108877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:46:12.231398  108877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:46:12.233378  108877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:46:12.235393  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:12.235929  108877 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:46:12.259440  108877 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:46:12.259601  108877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:46:12.315152  108877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:46:12.305215381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:46:12.315257  108877 docker.go:318] overlay module found
	I0919 22:46:12.317207  108877 out.go:179] * Using the docker driver based on existing profile
	I0919 22:46:12.318613  108877 start.go:304] selected driver: docker
	I0919 22:46:12.318631  108877 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false ku
bevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:12.318764  108877 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:46:12.318866  108877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:46:12.375932  108877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:46:12.36611658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:46:12.376654  108877 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:46:12.376683  108877 cni.go:84] Creating CNI manager for ""
	I0919 22:46:12.376742  108877 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:46:12.376800  108877 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-devic
e-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:12.378603  108877 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:46:12.380006  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:12.381572  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:12.382857  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:12.382906  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:12.382923  108877 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:46:12.382936  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:12.383039  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:12.383055  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:12.383212  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:12.403326  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:12.403345  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:12.403361  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:12.403384  108877 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:12.403455  108877 start.go:364] duration metric: took 45.824µs to acquireMachinesLock for "ha-984158"
	I0919 22:46:12.403473  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:12.403482  108877 fix.go:54] fixHost starting: 
	I0919 22:46:12.403690  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:12.421194  108877 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:46:12.421238  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:12.423604  108877 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:46:12.423684  108877 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:46:12.673870  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:12.696836  108877 kic.go:430] container "ha-984158" state is running.
	I0919 22:46:12.697260  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:12.718695  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:12.718941  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:12.719002  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:12.741823  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:12.742061  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:12.742077  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:12.742802  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32838: read: connection reset by peer
	I0919 22:46:15.881704  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:46:15.881745  108877 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:46:15.881804  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:15.901150  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:15.901417  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:15.901437  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:46:16.050888  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:46:16.050963  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.068615  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:16.068892  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:16.068914  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:46:16.208904  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:16.208934  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:46:16.208956  108877 ubuntu.go:190] setting up certificates
	I0919 22:46:16.208967  108877 provision.go:84] configureAuth start
	I0919 22:46:16.209031  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:16.227718  108877 provision.go:143] copyHostCerts
	I0919 22:46:16.227763  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:16.227792  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:46:16.227811  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:16.227885  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:46:16.227987  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:16.228007  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:46:16.228013  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:16.228040  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:46:16.228150  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:16.228172  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:46:16.228179  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:16.228209  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:46:16.228337  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:46:16.573002  108877 provision.go:177] copyRemoteCerts
	I0919 22:46:16.573064  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:46:16.573113  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.592217  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:16.690168  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:46:16.690236  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:46:16.715223  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:46:16.715291  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:46:16.740942  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:46:16.741005  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:46:16.766354  108877 provision.go:87] duration metric: took 557.37452ms to configureAuth
	I0919 22:46:16.766382  108877 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:46:16.766610  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:16.766705  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.786657  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:16.786955  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:16.786980  108877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:46:17.096636  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:46:17.096672  108877 machine.go:96] duration metric: took 4.377714802s to provisionDockerMachine
	I0919 22:46:17.096688  108877 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:46:17.096701  108877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:46:17.096770  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:46:17.096823  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.119671  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.218230  108877 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:46:17.221650  108877 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:46:17.221677  108877 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:46:17.221684  108877 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:46:17.221690  108877 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:46:17.221700  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:46:17.221764  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:46:17.221848  108877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:46:17.221859  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:46:17.221941  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:46:17.231608  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:17.256965  108877 start.go:296] duration metric: took 160.262267ms for postStartSetup
	I0919 22:46:17.257080  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:17.257142  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.275475  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.368260  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:17.372717  108877 fix.go:56] duration metric: took 4.969233422s for fixHost
	I0919 22:46:17.372745  108877 start.go:83] releasing machines lock for "ha-984158", held for 4.969278s
	I0919 22:46:17.372815  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:17.390438  108877 ssh_runner.go:195] Run: cat /version.json
	I0919 22:46:17.390483  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.390536  108877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:46:17.390601  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.410661  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.410957  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.578439  108877 ssh_runner.go:195] Run: systemctl --version
	I0919 22:46:17.583306  108877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:46:17.724560  108877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:46:17.729340  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:17.738652  108877 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:46:17.738736  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:17.748613  108877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:46:17.748636  108877 start.go:495] detecting cgroup driver to use...
	I0919 22:46:17.748665  108877 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:46:17.748708  108877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:46:17.761846  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:46:17.774159  108877 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:46:17.774220  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:46:17.786916  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:46:17.799471  108877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:46:17.862027  108877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:46:17.932767  108877 docker.go:234] disabling docker service ...
	I0919 22:46:17.932824  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:46:17.946036  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:46:17.958434  108877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:46:18.026742  108877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:46:18.092388  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:46:18.104517  108877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:46:18.122118  108877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:46:18.122187  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.133296  108877 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:46:18.133358  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.144273  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.154713  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.165450  108877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:46:18.175471  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.186448  108877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.196793  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.207323  108877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:46:18.216504  108877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:46:18.226278  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:18.292582  108877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:46:18.395143  108877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:46:18.395208  108877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:46:18.399260  108877 start.go:563] Will wait 60s for crictl version
	I0919 22:46:18.399345  108877 ssh_runner.go:195] Run: which crictl
	I0919 22:46:18.403306  108877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:46:18.439273  108877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:46:18.439358  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:18.477736  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:18.517625  108877 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:46:18.519401  108877 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:46:18.538950  108877 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:46:18.543029  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:18.555164  108877 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:46:18.555281  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:18.555321  108877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:46:18.602120  108877 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:46:18.602145  108877 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:46:18.602190  108877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:46:18.638063  108877 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:46:18.638085  108877 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:46:18.638096  108877 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:46:18.638217  108877 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:46:18.638289  108877 ssh_runner.go:195] Run: crio config
	I0919 22:46:18.682755  108877 cni.go:84] Creating CNI manager for ""
	I0919 22:46:18.682776  108877 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:46:18.682785  108877 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:46:18.682804  108877 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:46:18.682949  108877 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:46:18.682971  108877 kube-vip.go:115] generating kube-vip config ...
	I0919 22:46:18.683023  108877 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:46:18.695680  108877 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:18.695771  108877 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:46:18.695831  108877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:46:18.704995  108877 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:46:18.705090  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:46:18.714229  108877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:46:18.732876  108877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:46:18.751654  108877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:46:18.771660  108877 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:46:18.791347  108877 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:46:18.795300  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:18.807294  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:18.870326  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:18.890598  108877 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:46:18.890622  108877 certs.go:194] generating shared ca certs ...
	I0919 22:46:18.890642  108877 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:18.890820  108877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:46:18.890875  108877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:46:18.890884  108877 certs.go:256] generating profile certs ...
	I0919 22:46:18.890988  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:46:18.891026  108877 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d
	I0919 22:46:18.891041  108877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:46:19.605865  108877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d ...
	I0919 22:46:19.605953  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d: {Name:mk7f25dd3beb69a2627b32c86fa05a4a9f1ad6c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:19.606168  108877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d ...
	I0919 22:46:19.606186  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d: {Name:mk8f6bf1f9253215ea3b4b09434f0ad297843936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:19.606312  108877 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:46:19.606498  108877 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:46:19.606699  108877 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:46:19.606716  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:46:19.606735  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:46:19.606749  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:46:19.606766  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:46:19.606780  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:46:19.606794  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:46:19.606807  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:46:19.606821  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:46:19.606887  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:46:19.606926  108877 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:46:19.606936  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:46:19.606966  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:46:19.606994  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:46:19.607023  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:46:19.607083  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:19.607136  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:19.607156  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.607172  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.608038  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:46:19.647257  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:46:19.680445  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:46:19.707341  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:46:19.732949  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:46:19.759195  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:46:19.784760  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:46:19.811628  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:46:19.838352  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:46:19.864825  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:46:19.890278  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:46:19.916079  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:46:19.935550  108877 ssh_runner.go:195] Run: openssl version
	I0919 22:46:19.941303  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:46:19.951377  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.955360  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.955418  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.962652  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:46:19.972203  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:46:19.984692  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.989793  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.989856  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.997255  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:46:20.007365  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:46:20.018217  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.022308  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.022372  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.029407  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:46:20.039229  108877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:46:20.043177  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:46:20.050319  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:46:20.057484  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:46:20.064329  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:46:20.071249  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:46:20.078085  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:46:20.084889  108877 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false log
viewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAu
thSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:20.085014  108877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:46:20.085084  108877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:46:20.126875  108877 cri.go:89] found id: "965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461"
	I0919 22:46:20.126897  108877 cri.go:89] found id: "59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce"
	I0919 22:46:20.126904  108877 cri.go:89] found id: "e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6"
	I0919 22:46:20.126908  108877 cri.go:89] found id: "28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0"
	I0919 22:46:20.126913  108877 cri.go:89] found id: "8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2"
	I0919 22:46:20.126919  108877 cri.go:89] found id: ""
	I0919 22:46:20.126969  108877 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:46:20.152531  108877 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0","pid":821,"status":"running","bundle":"/run/containers/storage/overlay-containers/28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0/userdata","rootfs":"/var/lib/containers/storage/overlay/cc7fd9c1671034c7ec28c804e89098f3430de08294ed80b7199c664a5f72ba8e/merged","created":"2025-09-19T22:46:19.545218383Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.459120421Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/cc7fd9c1671034c7ec28c804e89098f3430de08294ed80b7199c664a5f72ba8e/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d6df5b205fc00249d0e9590a985ea3a627fb8001b0cb30fb23590ca88bed9d95/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d6df5b205fc00249d0e9590a985ea3a627fb8001b0cb30fb23590ca88bed9d95","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubele
t/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/0984bd68\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:46:18.961795051Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce","pid":836,"status":"running","bundle":"/r
un/containers/storage/overlay-containers/59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce/userdata","rootfs":"/var/lib/containers/storage/overlay/b7769c0dc387db2817a2192fae2ca0b5ab06b67506fd54e81eac8724cada8d35/merged","created":"2025-09-19T22:46:19.553698427Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.te
rminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.485443854Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/
2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b7769c0dc387db2817a2192fae2ca0b5ab06b67506fd54e81eac8724cada8d35/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b17e0b9c519a3a36026153f88111e79a608ff665648c4474defb58b5cfaf6d8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b17e0b9c519a3a36026153f88111e79a608ff665648c4474defb58b5cfaf6d8b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\
",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/0619f79b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:46:18.961806595Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000
000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2","pid":822,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2/userdata","rootfs":"/var/lib/containers/storage/overlay/c5a33b5e3ac31267e2463538a0ad9e67be17ebbfb905e94c0e9d15a43a37fdfe/merged","created":"2025-09-19T22:46:19.545704104Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPo
rt\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.449271076Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223dc4628f1e45acc16ccdb\"}","io.kubernete
s.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c5a33b5e3ac31267e2463538a0ad9e67be17ebbfb905e94c0e9d15a43a37fdfe/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca77eab195341f9bfeee850a0984b8ce26c195495117003bb595b13a5af2680/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca77eab195341f9bfeee850a0984b8ce26c195495117003bb595b13a5af2680","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/34e2f8ea\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kubernetes.io/config.seen":"2025-09-19T2
2:46:18.961800811Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461","pid":840,"status":"running","bundle":"/run/containers/storage/overlay-containers/965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461/userdata","rootfs":"/var/lib/containers/storage/overlay/f113fa372b328daab74d27019751ddfd1ddb9a1d158a4e75b95a7c52c405c6c4/merged","created":"2025-09-19T22:46:19.554821732Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.co
ntainer.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.489012262Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d
2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f113fa372b328daab74d27019751ddfd1ddb9a1d158a4e75b95a7c52c405c6c4/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7a2352c9ba15d31b2b729265d3a26885bb44ca45f6d9f3b7e775f939fb89cc25/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7a2352c9ba15d31b2b729265d3a26885bb44ca45f6d9f3b7e775f939fb
89cc25","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/625662ae\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_
path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:46:18.961803043Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.propert
y.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6","pid":852,"status":"running","bundle":"/run/containers/storage/overlay-containers/e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6/userdata","rootfs":"/var/lib/containers/storage/overlay/c264b914bf6139ef613a0cc00f27820455e1cb24f62d2c566377ad12d2382849/merged","created":"2025-09-19T22:46:19.556712502Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.contain
er.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.474613868Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"
kube-controller-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c264b914bf6139ef613a0cc00f27820455e1cb24f62d2c566377ad12d2382849/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/de818e09e70e234296b86f4c43c58dcd49c79f8617daea40d4324baf6ff48cc9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"de818e09e70e234296b86f4c43c58dcd49c79f8617daea40d4324baf6ff48cc9","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6
b05a580a11369967b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/4ae132f8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.co
nf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a1136996
7b27d393af16","kubernetes.io/config.seen":"2025-09-19T22:46:18.961804962Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:46:20.153032  108877 cri.go:126] list returned 5 containers
	I0919 22:46:20.153055  108877 cri.go:129] container: {ID:28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0 Status:running}
	I0919 22:46:20.153074  108877 cri.go:135] skipping {28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0 running}: state = "running", want "paused"
	I0919 22:46:20.153090  108877 cri.go:129] container: {ID:59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce Status:running}
	I0919 22:46:20.153097  108877 cri.go:135] skipping {59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce running}: state = "running", want "paused"
	I0919 22:46:20.153120  108877 cri.go:129] container: {ID:8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2 Status:running}
	I0919 22:46:20.153126  108877 cri.go:135] skipping {8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2 running}: state = "running", want "paused"
	I0919 22:46:20.153136  108877 cri.go:129] container: {ID:965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461 Status:running}
	I0919 22:46:20.153144  108877 cri.go:135] skipping {965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461 running}: state = "running", want "paused"
	I0919 22:46:20.153152  108877 cri.go:129] container: {ID:e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6 Status:running}
	I0919 22:46:20.153159  108877 cri.go:135] skipping {e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6 running}: state = "running", want "paused"
	I0919 22:46:20.153217  108877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:46:20.163798  108877 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:46:20.163821  108877 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:46:20.163868  108877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:46:20.173357  108877 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:20.173815  108877 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:20.173926  108877 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:46:20.174266  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.174929  108877 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:46:20.175466  108877 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:46:20.175485  108877 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:46:20.175491  108877 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:46:20.175496  108877 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:46:20.175508  108877 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:46:20.175532  108877 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:46:20.175951  108877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:46:20.185275  108877 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:46:20.185304  108877 kubeadm.go:593] duration metric: took 21.472405ms to restartPrimaryControlPlane
	I0919 22:46:20.185315  108877 kubeadm.go:394] duration metric: took 100.433015ms to StartCluster
	I0919 22:46:20.185333  108877 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.185409  108877 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:20.186087  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.186338  108877 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:46:20.186370  108877 start.go:241] waiting for startup goroutines ...
	I0919 22:46:20.186378  108877 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:46:20.186635  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:20.189653  108877 out.go:179] * Enabled addons: 
	I0919 22:46:20.191192  108877 addons.go:514] duration metric: took 4.807431ms for enable addons: enabled=[]
	I0919 22:46:20.191234  108877 start.go:246] waiting for cluster config update ...
	I0919 22:46:20.191247  108877 start.go:255] writing updated cluster config ...
	I0919 22:46:20.193246  108877 out.go:203] 
	I0919 22:46:20.195195  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:20.195308  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.197094  108877 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:46:20.198374  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:20.199729  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:20.200930  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:20.200958  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:20.200957  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:20.201052  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:20.201070  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:20.201207  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.225517  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:20.225538  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:20.225559  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:20.225589  108877 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:20.225650  108877 start.go:364] duration metric: took 41.873µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:46:20.225673  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:20.225679  108877 fix.go:54] fixHost starting: m02
	I0919 22:46:20.225965  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:46:20.246500  108877 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:46:20.246530  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:20.248555  108877 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:46:20.248640  108877 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:46:20.515186  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:46:20.536840  108877 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:46:20.537225  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:20.557968  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.558248  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:20.558317  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:20.577500  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:20.577734  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:20.577750  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:20.578405  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37082->127.0.0.1:32843: read: connection reset by peer
	I0919 22:46:23.726391  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:46:23.726419  108877 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:46:23.726483  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:23.757624  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:23.757898  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:23.757918  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:46:23.968819  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:46:23.968912  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.000480  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:24.000783  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:24.000820  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:46:24.160931  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:24.160963  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:46:24.160983  108877 ubuntu.go:190] setting up certificates
	I0919 22:46:24.160993  108877 provision.go:84] configureAuth start
	I0919 22:46:24.161046  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:24.183569  108877 provision.go:143] copyHostCerts
	I0919 22:46:24.183623  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:24.183664  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:46:24.183673  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:24.183765  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:46:24.183860  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:24.183887  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:46:24.183893  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:24.183935  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:46:24.184016  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:24.184042  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:46:24.184052  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:24.184119  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:46:24.184203  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:46:24.480167  108877 provision.go:177] copyRemoteCerts
	I0919 22:46:24.480231  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:46:24.480275  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.498555  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:24.595854  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:46:24.595918  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:46:24.622515  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:46:24.622579  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:46:24.649573  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:46:24.649635  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:46:24.676266  108877 provision.go:87] duration metric: took 515.262319ms to configureAuth
	I0919 22:46:24.676306  108877 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:46:24.676727  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:24.676896  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.696841  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:24.697083  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:24.697124  108877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:46:25.086955  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:46:25.086982  108877 machine.go:96] duration metric: took 4.528716196s to provisionDockerMachine
	I0919 22:46:25.086996  108877 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:46:25.087011  108877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:46:25.087070  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:46:25.087137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.112242  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.223680  108877 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:46:25.229296  108877 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:46:25.229349  108877 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:46:25.229360  108877 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:46:25.229368  108877 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:46:25.229381  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:46:25.229444  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:46:25.229556  108877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:46:25.229575  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:46:25.229692  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:46:25.242078  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:25.278732  108877 start.go:296] duration metric: took 191.719996ms for postStartSetup
	I0919 22:46:25.278817  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:25.278874  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.304273  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.421654  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:25.445209  108877 fix.go:56] duration metric: took 5.219521661s for fixHost
	I0919 22:46:25.445242  108877 start.go:83] releasing machines lock for "ha-984158-m02", held for 5.219578683s
	I0919 22:46:25.445316  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:25.479010  108877 out.go:179] * Found network options:
	I0919 22:46:25.480818  108877 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:46:25.482353  108877 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:46:25.482414  108877 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:46:25.482511  108877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:46:25.482570  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.482798  108877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:46:25.482835  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.512551  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.514422  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.765126  108877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:46:25.771133  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:25.782542  108877 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:46:25.782668  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:25.793362  108877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:46:25.793384  108877 start.go:495] detecting cgroup driver to use...
	I0919 22:46:25.793413  108877 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:46:25.793446  108877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:46:25.806649  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:46:25.820020  108877 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:46:25.820150  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:46:25.834424  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:46:25.846716  108877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:46:25.979915  108877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:46:26.160733  108877 docker.go:234] disabling docker service ...
	I0919 22:46:26.160800  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:46:26.180405  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:46:26.193545  108877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:46:26.323966  108877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:46:26.454055  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:46:26.471608  108877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:46:26.491683  108877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:46:26.491759  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.503650  108877 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:46:26.503736  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.515882  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.528470  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.540665  108877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:46:26.551186  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.562785  108877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.576061  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.588723  108877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:46:26.598603  108877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:46:26.608664  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:26.737407  108877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:46:27.001831  108877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:46:27.001907  108877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:46:27.006455  108877 start.go:563] Will wait 60s for crictl version
	I0919 22:46:27.006516  108877 ssh_runner.go:195] Run: which crictl
	I0919 22:46:27.010137  108877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:46:27.048954  108877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:46:27.049041  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:27.089149  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:27.131001  108877 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:46:27.133021  108877 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:46:27.135238  108877 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:46:27.153890  108877 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:46:27.158228  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:27.171318  108877 mustload.go:65] Loading cluster: ha-984158
	I0919 22:46:27.171533  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:27.171738  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:27.193596  108877 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:46:27.193834  108877 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:46:27.193846  108877 certs.go:194] generating shared ca certs ...
	I0919 22:46:27.193859  108877 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:27.193962  108877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:46:27.194001  108877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:46:27.194010  108877 certs.go:256] generating profile certs ...
	I0919 22:46:27.194079  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:46:27.194165  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:46:27.194224  108877 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:46:27.194238  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:46:27.194253  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:46:27.194265  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:46:27.194278  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:46:27.194297  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:46:27.194310  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:46:27.194323  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:46:27.194339  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:46:27.194411  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:46:27.194441  108877 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:46:27.194450  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:46:27.194471  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:46:27.194492  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:46:27.194516  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:46:27.194565  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:27.194590  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:27.194602  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.194615  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:46:27.194657  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:27.213026  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:27.304420  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:46:27.312939  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:46:27.335604  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:46:27.340863  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:46:27.358586  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:46:27.362601  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:46:27.378555  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:46:27.383438  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:46:27.400539  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:46:27.404743  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:46:27.422198  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:46:27.427849  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:46:27.444001  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:46:27.474378  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:46:27.505532  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:46:27.533300  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:46:27.561118  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:46:27.590142  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:46:27.618324  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:46:27.647550  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:46:27.676298  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:46:27.707393  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:46:27.746925  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:46:27.783270  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:46:27.808955  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:46:27.835057  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:46:27.859054  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:46:27.883780  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:46:27.905848  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:46:27.929554  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:46:27.949528  108877 ssh_runner.go:195] Run: openssl version
	I0919 22:46:27.955293  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:46:27.966171  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.970845  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.970917  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.978879  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:46:27.988983  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:46:27.999569  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.004058  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.004197  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.011324  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:46:28.022191  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:46:28.033554  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.037405  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.037468  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.044623  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:46:28.054934  108877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:46:28.059006  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:46:28.066671  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:46:28.074789  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:46:28.083169  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:46:28.090396  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:46:28.097472  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:46:28.104903  108877 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:46:28.105012  108877 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:46:28.105038  108877 kube-vip.go:115] generating kube-vip config ...
	I0919 22:46:28.105077  108877 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:46:28.118386  108877 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:28.118444  108877 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
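Before rendering the kube-vip manifest above, minikube probed for IPVS support with `lsmod | grep ip_vs`; the probe exited with status 1, so IPVS-based control-plane load balancing was skipped and the manifest falls back to ARP mode with leader election (vip_arp=true, vip_leaderelection=true, vip_interface=eth0) for the virtual IP 192.168.49.254 on port 8443. A short Go sketch of that probe-and-fallback decision follows; the returned mode names are invented for illustration and are not minikube's.

	// Decide between IPVS-based and ARP-based control-plane load balancing the
	// way the log above does: if "lsmod | grep ip_vs" fails, fall back to ARP
	// leader election. The mode strings here are illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeVIPMode() string {
		if err := exec.Command("sh", "-c", "lsmod | grep ip_vs").Run(); err != nil {
			// Matches the log's "giving up enabling control-plane load-balancing"
			// branch taken when the ip_vs kernel modules are not loaded.
			return "arp-leader-election"
		}
		return "ipvs"
	}

	func main() {
		fmt.Println("kube-vip mode:", kubeVIPMode())
	}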
	I0919 22:46:28.118499  108877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:46:28.128992  108877 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:46:28.129066  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:46:28.138683  108877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:46:28.157570  108877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:46:28.179790  108877 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:46:28.199021  108877 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:46:28.203009  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
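The grep above checks whether /etc/hosts already maps control-plane.minikube.internal to the HA virtual IP 192.168.49.254; the follow-up command rewrites the file, dropping any stale control-plane.minikube.internal line and appending the current mapping. A Go sketch of the same ensure-entry rewrite follows; the file path is parameterised so the sketch can be tried against a copy rather than the real /etc/hosts, and the helper name is illustrative.

	// Ensure an /etc/hosts-style file contains exactly one entry mapping host to
	// ip, mirroring the grep/rewrite pair in the log above.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil && !os.IsNotExist(err) {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line == "" || strings.HasSuffix(line, "\t"+host) {
				continue // drop blank lines and any stale tab-separated entry for host
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Try it on a scratch copy, e.g. "hosts.copy", not on /etc/hosts itself.
		err := ensureHostsEntry("hosts.copy", "192.168.49.254", "control-plane.minikube.internal")
		fmt.Println(err)
	}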
	I0919 22:46:28.215658  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:28.329589  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:28.341890  108877 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:46:28.342184  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:28.345551  108877 out.go:179] * Verifying Kubernetes components...
	I0919 22:46:28.347145  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:28.465160  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:28.480699  108877 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:46:28.480762  108877 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:46:28.480949  108877 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:46:28.489433  108877 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:46:28.489464  108877 node_ready.go:38] duration metric: took 8.500754ms for node "ha-984158-m02" to be "Ready" ...
	I0919 22:46:28.489478  108877 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:46:28.489524  108877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:46:28.502734  108877 api_server.go:72] duration metric: took 160.79998ms to wait for apiserver process to appear ...
	I0919 22:46:28.502770  108877 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:46:28.502793  108877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:46:28.508523  108877 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:46:28.509513  108877 api_server.go:141] control plane version: v1.34.0
	I0919 22:46:28.509537  108877 api_server.go:131] duration metric: took 6.759754ms to wait for apiserver health ...
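Note the warning a few lines up: the generated client config initially points at the virtual IP https://192.168.49.254:8443 and is overridden to the primary control plane at https://192.168.49.2:8443 before the /healthz probe, which returns 200 with body "ok" and is followed by reading the control-plane version. A hedged Go sketch of that mutual-TLS health probe is below; the certificate, key, and CA paths are the ones printed in the client config above, everything else is illustrative.

	// Probe an apiserver /healthz endpoint with client-certificate TLS, the check
	// the log performs against https://192.168.49.2:8443/healthz. Paths are the
	// ones from this CI run's profile; adjust for your own environment.
	package main

	import (
		"crypto/tls"
		"crypto/x509"
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		cert, err := tls.LoadX509KeyPair(
			"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt",
			"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key",
		)
		if err != nil {
			panic(err)
		}
		caPEM, err := os.ReadFile("/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt")
		if err != nil {
			panic(err)
		}
		pool := x509.NewCertPool()
		pool.AppendCertsFromPEM(caPEM)

		client := &http.Client{Transport: &http.Transport{TLSClientConfig: &tls.Config{
			Certificates: []tls.Certificate{cert},
			RootCAs:      pool,
		}}}
		resp, err := client.Get("https://192.168.49.2:8443/healthz")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // the log above saw: 200, "ok"
	}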
	I0919 22:46:28.509545  108877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:46:28.520364  108877 system_pods.go:59] 24 kube-system pods found
	I0919 22:46:28.520531  108877 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.520549  108877 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.520562  108877 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.520572  108877 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.520578  108877 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:46:28.520583  108877 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:46:28.520651  108877 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:46:28.520667  108877 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:46:28.520700  108877 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.520727  108877 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.520733  108877 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:46:28.520743  108877 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.520750  108877 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.520756  108877 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:46:28.520761  108877 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:46:28.520790  108877 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:46:28.520804  108877 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:46:28.520821  108877 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.520838  108877 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.520885  108877 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:46:28.520900  108877 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:46:28.520907  108877 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:46:28.520913  108877 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:46:28.520918  108877 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:46:28.520964  108877 system_pods.go:74] duration metric: took 11.374418ms to wait for pod list to return data ...
	I0919 22:46:28.520985  108877 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:46:28.533346  108877 default_sa.go:45] found service account: "default"
	I0919 22:46:28.533374  108877 default_sa.go:55] duration metric: took 12.372821ms for default service account to be created ...
	I0919 22:46:28.533386  108877 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:46:28.540037  108877 system_pods.go:86] 24 kube-system pods found
	I0919 22:46:28.540077  108877 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.540086  108877 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.540093  108877 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.540130  108877 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.540138  108877 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:46:28.540143  108877 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:46:28.540148  108877 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:46:28.540153  108877 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:46:28.540160  108877 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.540167  108877 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.540171  108877 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:46:28.540177  108877 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.540186  108877 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.540190  108877 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:46:28.540197  108877 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:46:28.540201  108877 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:46:28.540206  108877 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:46:28.540211  108877 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.540216  108877 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.540224  108877 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:46:28.540228  108877 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:46:28.540231  108877 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:46:28.540234  108877 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:46:28.540237  108877 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:46:28.540244  108877 system_pods.go:126] duration metric: took 6.851735ms to wait for k8s-apps to be running ...
	I0919 22:46:28.540253  108877 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:46:28.540297  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:46:28.553240  108877 system_svc.go:56] duration metric: took 12.975587ms WaitForService to wait for kubelet
	I0919 22:46:28.553269  108877 kubeadm.go:578] duration metric: took 211.340401ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:46:28.553284  108877 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:46:28.556598  108877 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:46:28.556630  108877 node_conditions.go:123] node cpu capacity is 8
	I0919 22:46:28.556644  108877 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:46:28.556649  108877 node_conditions.go:123] node cpu capacity is 8
	I0919 22:46:28.556655  108877 node_conditions.go:105] duration metric: took 3.365055ms to run NodePressure ...
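The NodePressure verification reads each node's reported capacity: the two nodes checked so far each report 304681132Ki of ephemeral storage and 8 CPUs. A client-go sketch that lists those capacities is below; the kubeconfig path is a placeholder, not something taken from the log.

	// List node capacities (CPU, ephemeral storage) of the kind the NodePressure
	// check above reports. The kubeconfig path is a placeholder for illustration.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, n := range nodes.Items {
			cpu := n.Status.Capacity[corev1.ResourceCPU]
			storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
			fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		}
	}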
	I0919 22:46:28.556668  108877 start.go:241] waiting for startup goroutines ...
	I0919 22:46:28.556700  108877 start.go:255] writing updated cluster config ...
	I0919 22:46:28.559365  108877 out.go:203] 
	I0919 22:46:28.561049  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:28.561185  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.563177  108877 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:46:28.565659  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:28.567464  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:28.569620  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:28.569658  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:28.569731  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:28.569801  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:28.569818  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:28.570024  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.591350  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:28.591370  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:28.591387  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:28.591426  108877 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:28.591497  108877 start.go:364] duration metric: took 50.571µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:46:28.591521  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:28.591532  108877 fix.go:54] fixHost starting: m04
	I0919 22:46:28.591813  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:46:28.611528  108877 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:46:28.611563  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:28.614509  108877 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:46:28.614597  108877 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:46:28.891706  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:46:28.909967  108877 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:46:28.910342  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:46:28.932443  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.932786  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:28.932866  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:46:28.952373  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:28.952595  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:46:28.952610  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:28.953238  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58822->127.0.0.1:32848: read: connection reset by peer
	I0919 22:46:31.992434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:35.030266  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:38.067360  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:41.104918  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:44.141546  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:47.180486  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:50.217377  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:53.253625  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:56.290653  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:59.328801  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:02.366335  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:05.404369  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:08.441218  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:11.478040  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:14.517286  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:17.555077  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:20.590775  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:23.628032  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:26.665685  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:29.702885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:32.739715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:35.776525  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:38.813501  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:41.850732  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:44.888481  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:47.925335  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:50.962781  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:54.001543  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:57.039219  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:00.076356  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:03.114796  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:06.153655  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:09.190434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:12.228005  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:15.266169  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:18.303890  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:21.341005  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:24.378793  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:27.415263  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:30.452522  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:33.489642  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:36.526922  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:39.565971  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:42.604085  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:45.642453  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:48.680255  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:51.718847  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:54.755973  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:57.794058  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:00.830885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:03.867241  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:06.905329  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:09.942692  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.979007  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:16.016090  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:19.052722  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:22.090878  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:25.128424  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:28.164980  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:31.165214  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
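Between 22:46:28 and 22:49:31 the provisioner retried the SSH handshake to 127.0.0.1:32848 roughly every three seconds. The first attempt failed with a reset connection; every later attempt failed with "unable to authenticate, attempted methods [none publickey]", which suggests the key pair offered by the test host is not accepted by the restarted ha-984158-m04 container, and the hostname command result is finally logged with empty output. A hedged Go sketch of such a dial-with-retry loop using golang.org/x/crypto/ssh follows; the address, user, key path, and retry budget are illustrative values, not minikube's actual provisioner code.

	// Dial an SSH server with public-key auth, retrying on handshake failure the
	// way the provisioning loop above does (roughly one attempt every three
	// seconds). All concrete values here are illustrative.
	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func dialWithRetry(addr, user, keyPath string, attempts int) (*ssh.Client, error) {
		key, err := os.ReadFile(keyPath)
		if err != nil {
			return nil, err
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			return nil, err
		}
		cfg := &ssh.ClientConfig{
			User:            user,
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // host key verification skipped in this sketch
			Timeout:         10 * time.Second,
		}
		var lastErr error
		for i := 0; i < attempts; i++ {
			client, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return client, nil
			}
			lastErr = err // e.g. "ssh: unable to authenticate, attempted methods [none publickey]"
			time.Sleep(3 * time.Second)
		}
		return nil, fmt.Errorf("giving up after %d attempts: %w", attempts, lastErr)
	}

	func main() {
		_, err := dialWithRetry("127.0.0.1:32848", "docker", "/path/to/id_rsa", 60)
		fmt.Println(err)
	}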
	I0919 22:49:31.165246  108877 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:49:31.165343  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:49:31.186337  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:49:31.186559  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:49:31.186572  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m04 && echo "ha-984158-m04" | sudo tee /etc/hostname
	I0919 22:49:31.222041  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:34.260373  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:37.296429  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:40.333177  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:43.369818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:46.407326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:49.445653  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:52.481801  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:55.519688  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:58.556954  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:01.594704  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:04.631982  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:07.671074  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:10.707659  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:13.743738  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:16.780434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:19.818728  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:22.856313  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:25.895572  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:28.933186  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.971330  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:35.009667  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:38.046268  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:41.085862  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:44.123308  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:47.162970  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:50.201144  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:53.237530  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:56.277091  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:59.315180  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:02.352338  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:05.393758  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:08.429927  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:11.467205  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:14.505353  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:17.541220  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:20.577894  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:23.615430  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:26.652560  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:29.692571  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:32.729044  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:35.767750  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:38.804231  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:41.841785  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:44.879839  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:47.915818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.951715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.987351  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:57.023082  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:00.061237  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:03.100535  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:06.138086  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:09.175795  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:12.212791  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:15.251952  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:18.287922  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:21.324066  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:24.361564  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:27.399277  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:30.435299  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:33.436225  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:52:33.436347  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:52:33.457501  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:52:33.457777  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:52:33.457803  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:52:33.496865  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:36.535257  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:39.572013  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:42.609779  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:45.647614  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:48.684847  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:51.721701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:54.759074  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:57.796080  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:00.832366  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:03.868817  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:06.905935  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:09.942978  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.979706  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:16.016675  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:19.056677  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:22.094715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:25.132270  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:28.169494  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:31.206055  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:34.243569  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:37.279505  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:40.316595  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.353466  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.390229  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.429152  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.466242  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.505090  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.542171  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.579326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.618595  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.655460  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.694154  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.730639  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.768164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:19.806285  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:22.841871  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:25.880314  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:28.916546  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	[... 20 additional identical "Error dialing TCP: ssh: handshake failed" retry lines between 22:54:31 and 22:55:32 elided ...]
	I0919 22:55:32.707149  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:35.708797  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:55:35.708827  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:55:35.708848  108877 ubuntu.go:190] setting up certificates
	I0919 22:55:35.708859  108877 provision.go:84] configureAuth start
	I0919 22:55:35.708915  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:35.730835  108877 provision.go:143] copyHostCerts
	I0919 22:55:35.730877  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:35.730913  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:35.730922  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:35.731023  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:35.731145  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:35.731168  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:35.731175  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:35.731212  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:35.731268  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:35.731288  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:35.731295  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:35.731320  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:35.731382  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:36.000694  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:36.000754  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:36.000792  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:36.019214  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:36.055827  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.055873  108877 retry.go:31] will retry after 182.097125ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:36.274693  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.274733  108877 retry.go:31] will retry after 386.768315ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:36.698187  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.698226  108877 retry.go:31] will retry after 362.057256ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:37.098814  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.098849  108877 retry.go:31] will retry after 787.271133ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:37.923015  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.923091  108877 provision.go:87] duration metric: took 2.21422803s to configureAuth
	W0919 22:55:37.923097  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.923153  108877 retry.go:31] will retry after 82.874µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.924303  108877 provision.go:84] configureAuth start
	I0919 22:55:37.924373  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:37.943722  108877 provision.go:143] copyHostCerts
	I0919 22:55:37.943762  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:37.943800  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:37.943812  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:37.943881  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:37.943977  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:37.944003  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:37.944013  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:37.944047  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:37.944176  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:37.944202  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:37.944212  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:37.944250  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:37.944357  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:38.121946  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:38.122004  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:38.122068  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:38.140663  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:38.177846  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.177874  108877 retry.go:31] will retry after 202.591135ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:38.418642  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.418669  108877 retry.go:31] will retry after 500.457311ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:38.956500  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.956544  108877 retry.go:31] will retry after 832.609802ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:39.826083  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.826197  108877 provision.go:87] duration metric: took 1.901874989s to configureAuth
	W0919 22:55:39.826209  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.826224  108877 retry.go:31] will retry after 191.755µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.827360  108877 provision.go:84] configureAuth start
	I0919 22:55:39.827427  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:39.845574  108877 provision.go:143] copyHostCerts
	I0919 22:55:39.845617  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:39.845646  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:39.845655  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:39.845715  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:39.845813  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:39.845833  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:39.845840  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:39.845863  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:39.845922  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:39.845939  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:39.845945  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:39.845964  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:39.846040  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:39.978299  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:39.978353  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:39.978404  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:39.996929  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:40.036821  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.036849  108877 retry.go:31] will retry after 355.315524ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:40.430448  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.430479  108877 retry.go:31] will retry after 524.043693ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:40.995748  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.995788  108877 retry.go:31] will retry after 825.079811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:41.857396  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.857496  108877 provision.go:87] duration metric: took 2.030120822s to configureAuth
	W0919 22:55:41.857504  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.857517  108877 retry.go:31] will retry after 196.455µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.858679  108877 provision.go:84] configureAuth start
	I0919 22:55:41.858761  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:41.877440  108877 provision.go:143] copyHostCerts
	I0919 22:55:41.877476  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:41.877504  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:41.877510  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:41.877569  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:41.877646  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:41.877664  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:41.877671  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:41.877692  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:41.877735  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:41.877752  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:41.877757  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:41.877775  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:41.877893  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:42.172702  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:42.172767  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:42.172802  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:42.191680  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:42.229220  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:42.229251  108877 retry.go:31] will retry after 337.452362ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:42.604511  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:42.604549  108877 retry.go:31] will retry after 484.976043ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:43.128620  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:43.128659  108877 retry.go:31] will retry after 309.196582ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:43.475021  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:43.475061  108877 retry.go:31] will retry after 537.150728ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.048722  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.048824  108877 provision.go:87] duration metric: took 2.190120686s to configureAuth
	W0919 22:55:44.048837  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.048852  108877 retry.go:31] will retry after 485.508µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.049993  108877 provision.go:84] configureAuth start
	I0919 22:55:44.050139  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:44.068794  108877 provision.go:143] copyHostCerts
	I0919 22:55:44.068840  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:44.068876  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:44.068888  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:44.068955  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:44.069097  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:44.069161  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:44.069170  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:44.069213  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:44.069302  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:44.069327  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:44.069334  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:44.069367  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:44.069465  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:44.149950  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:44.150042  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:44.150080  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:44.170311  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:44.208034  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.208067  108877 retry.go:31] will retry after 317.83838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.562094  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.562156  108877 retry.go:31] will retry after 368.430243ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.966948  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.966999  108877 retry.go:31] will retry after 300.011867ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:45.302980  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:45.303022  108877 retry.go:31] will retry after 670.167345ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.008703  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.008777  108877 provision.go:87] duration metric: took 1.958765521s to configureAuth
	W0919 22:55:46.008786  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.008795  108877 retry.go:31] will retry after 402.409µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.009909  108877 provision.go:84] configureAuth start
	I0919 22:55:46.009981  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:46.028169  108877 provision.go:143] copyHostCerts
	I0919 22:55:46.028208  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:46.028244  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:46.028257  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:46.028319  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:46.028426  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:46.028453  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:46.028460  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:46.028494  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:46.028559  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:46.028584  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:46.028593  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:46.028622  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:46.028752  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:46.085067  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:46.085149  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:46.085194  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:46.104771  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:46.141286  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.141321  108877 retry.go:31] will retry after 207.521471ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.387207  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.387241  108877 retry.go:31] will retry after 188.974379ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.613516  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.613549  108877 retry.go:31] will retry after 623.504755ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:47.274171  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.274247  108877 retry.go:31] will retry after 293.739201ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.568796  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:47.587183  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:47.626566  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.626603  108877 retry.go:31] will retry after 297.290434ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:47.959843  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.959875  108877 retry.go:31] will retry after 308.614989ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:48.306199  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:48.306228  108877 retry.go:31] will retry after 332.873983ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:48.677794  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:48.677820  108877 retry.go:31] will retry after 515.194731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:49.229678  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.229852  108877 provision.go:87] duration metric: took 3.219921943s to configureAuth
	W0919 22:55:49.229871  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.229885  108877 retry.go:31] will retry after 771.906µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.231039  108877 provision.go:84] configureAuth start
	I0919 22:55:49.231132  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:49.249933  108877 provision.go:143] copyHostCerts
	I0919 22:55:49.249972  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:49.250002  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:49.250011  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:49.250071  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:49.250213  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:49.250238  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:49.250245  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:49.250271  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:49.250344  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:49.250363  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:49.250378  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:49.250402  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:49.250471  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:49.448490  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:49.448554  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:49.448598  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:49.469591  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:49.505587  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.505623  108877 retry.go:31] will retry after 170.346142ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:49.713640  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.713675  108877 retry.go:31] will retry after 510.004107ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:50.260537  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:50.260571  108877 retry.go:31] will retry after 538.129291ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:50.835123  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:50.835210  108877 retry.go:31] will retry after 334.002809ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.169877  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:51.188990  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:51.226528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.226556  108877 retry.go:31] will retry after 188.622401ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:51.451939  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.451970  108877 retry.go:31] will retry after 246.781671ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:51.738861  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.738913  108877 retry.go:31] will retry after 687.433161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:52.463132  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.463228  108877 provision.go:87] duration metric: took 3.232167601s to configureAuth
	W0919 22:55:52.463242  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.463253  108877 retry.go:31] will retry after 1.470197ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.465465  108877 provision.go:84] configureAuth start
	I0919 22:55:52.465539  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:52.484373  108877 provision.go:143] copyHostCerts
	I0919 22:55:52.484410  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:52.484436  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:52.484445  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:52.484498  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:52.484585  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:52.484603  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:52.484607  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:52.484629  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:52.484686  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:52.484704  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:52.484708  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:52.484726  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:52.484789  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:52.776772  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:52.776836  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:52.776869  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:52.794899  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:52.833693  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.833739  108877 retry.go:31] will retry after 239.768811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:53.110629  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:53.110665  108877 retry.go:31] will retry after 481.507936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:53.629448  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:53.629481  108877 retry.go:31] will retry after 679.192834ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:54.344745  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.344825  108877 retry.go:31] will retry after 299.898432ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.645343  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:54.664630  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:54.700188  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.700227  108877 retry.go:31] will retry after 173.861141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:54.910656  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.910700  108877 retry.go:31] will retry after 446.087955ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:55.394429  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.394463  108877 retry.go:31] will retry after 492.588436ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:55.925984  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.926132  108877 provision.go:87] duration metric: took 3.46064756s to configureAuth
	W0919 22:55:55.926146  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.926157  108877 retry.go:31] will retry after 1.103973ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.928314  108877 provision.go:84] configureAuth start
	I0919 22:55:55.928383  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:55.946349  108877 provision.go:143] copyHostCerts
	I0919 22:55:55.946384  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:55.946414  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:55.946423  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:55.946479  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:55.946566  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:55.946587  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:55.946594  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:55.946616  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:55.946677  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:55.946695  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:55.946698  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:55.946718  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:55.946783  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:55.989895  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:55.989952  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:55.989992  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:56.010643  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:56.046843  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.046874  108877 retry.go:31] will retry after 200.709085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:56.284529  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.284563  108877 retry.go:31] will retry after 260.402259ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:56.584328  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.584356  108877 retry.go:31] will retry after 403.951779ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:57.027461  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.027496  108877 retry.go:31] will retry after 769.133652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:57.834789  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.834897  108877 provision.go:87] duration metric: took 1.906563875s to configureAuth
	W0919 22:55:57.834928  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.834952  108877 retry.go:31] will retry after 2.547029ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.838182  108877 provision.go:84] configureAuth start
	I0919 22:55:57.838251  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:57.857889  108877 provision.go:143] copyHostCerts
	I0919 22:55:57.857938  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:57.857978  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:57.857992  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:57.858214  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:57.858453  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:57.858500  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:57.858507  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:57.858547  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:57.858631  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:57.858652  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:57.858656  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:57.858686  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:57.858755  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:57.923859  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:57.923932  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:57.923988  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:57.942482  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:57.978505  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.978531  108877 retry.go:31] will retry after 131.970521ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:58.146397  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:58.146425  108877 retry.go:31] will retry after 530.399158ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:58.712484  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:58.712511  108877 retry.go:31] will retry after 786.372545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:59.534836  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.534922  108877 retry.go:31] will retry after 168.385695ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.704394  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:59.724227  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:59.760581  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.760612  108877 retry.go:31] will retry after 247.132588ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:00.044197  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:00.044224  108877 retry.go:31] will retry after 336.127105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:00.416602  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:00.416636  108877 retry.go:31] will retry after 720.277952ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:01.173095  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.173217  108877 provision.go:87] duration metric: took 3.335013579s to configureAuth
	W0919 22:56:01.173229  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.173243  108877 retry.go:31] will retry after 2.798832ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.176494  108877 provision.go:84] configureAuth start
	I0919 22:56:01.176575  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:01.195250  108877 provision.go:143] copyHostCerts
	I0919 22:56:01.195293  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:01.195331  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:01.195367  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:01.195510  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:01.195659  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:01.195689  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:01.195701  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:01.195740  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:01.195833  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:01.195857  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:01.195864  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:01.195897  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:01.195988  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:01.859275  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:01.859345  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:01.859388  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:01.879176  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:01.914943  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.914970  108877 retry.go:31] will retry after 258.363429ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:02.210869  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:02.210991  108877 retry.go:31] will retry after 560.664787ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:02.808203  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:02.808239  108877 retry.go:31] will retry after 561.515443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:03.405700  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.405799  108877 retry.go:31] will retry after 263.782493ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.670387  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:03.689156  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:03.724788  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.724820  108877 retry.go:31] will retry after 287.070084ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:04.048180  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:04.048218  108877 retry.go:31] will retry after 207.120232ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:04.291310  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:04.291346  108877 retry.go:31] will retry after 757.196129ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:05.086835  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.086959  108877 provision.go:87] duration metric: took 3.910440733s to configureAuth
	W0919 22:56:05.086974  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.086986  108877 retry.go:31] will retry after 5.223742ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
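
Each "will retry after Nms" line reflects a retry-with-backoff loop wrapped around the dial. The helper below is a hypothetical stand-in for that pattern (retryWithBackoff and its parameters are invented for illustration and are not minikube's retry package): attempt the operation, sleep a growing jittered delay, and give up after a bounded number of attempts.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff retries op with an exponentially growing, jittered delay,
// mirroring the "will retry after Nms" lines in the log above.
func retryWithBackoff(maxAttempts int, base time.Duration, op func() error) error {
	var lastErr error
	delay := base
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		if lastErr = op(); lastErr == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("attempt %d failed (%v); will retry after %v\n", attempt, lastErr, delay+jitter)
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return fmt.Errorf("gave up after %d attempts: %w", maxAttempts, lastErr)
}

func main() {
	err := retryWithBackoff(4, 150*time.Millisecond, func() error {
		return errors.New("ssh: handshake failed") // stand-in for the real SSH dial
	})
	fmt.Println(err)
}
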
	I0919 22:56:05.093247  108877 provision.go:84] configureAuth start
	I0919 22:56:05.093377  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:05.113825  108877 provision.go:143] copyHostCerts
	I0919 22:56:05.113865  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:05.113909  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:05.113915  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:05.113970  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:05.114424  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:05.115054  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:05.115087  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:05.115157  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:05.115268  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:05.115294  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:05.115300  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:05.115331  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:05.115412  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:05.404989  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:05.405045  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:05.405078  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:05.422957  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:05.459168  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.459197  108877 retry.go:31] will retry after 344.462045ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:05.841287  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.841328  108877 retry.go:31] will retry after 542.408002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:06.419402  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:06.419431  108877 retry.go:31] will retry after 605.017904ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:07.062463  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.062547  108877 retry.go:31] will retry after 275.860303ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.339003  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:07.356567  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:07.391748  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.391780  108877 retry.go:31] will retry after 178.699792ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:07.607876  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.607911  108877 retry.go:31] will retry after 375.15091ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:08.018976  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.019003  108877 retry.go:31] will retry after 784.188181ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:08.839997  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.840145  108877 provision.go:87] duration metric: took 3.746870768s to configureAuth
	W0919 22:56:08.840159  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.840169  108877 retry.go:31] will retry after 6.861054ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.847426  108877 provision.go:84] configureAuth start
	I0919 22:56:08.847505  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:08.865433  108877 provision.go:143] copyHostCerts
	I0919 22:56:08.865480  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:08.865518  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:08.865527  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:08.865593  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:08.865688  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:08.865715  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:08.865723  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:08.865762  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:08.865831  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:08.865859  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:08.865867  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:08.865899  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:08.865974  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:09.225606  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:09.225675  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:09.225720  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:09.245000  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:09.283542  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:09.283582  108877 retry.go:31] will retry after 143.583579ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:09.463983  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:09.464011  108877 retry.go:31] will retry after 511.26629ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:10.011156  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:10.011188  108877 retry.go:31] will retry after 376.764816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:10.424314  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:10.424349  108877 retry.go:31] will retry after 819.399589ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:11.279887  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.279970  108877 provision.go:87] duration metric: took 2.432521133s to configureAuth
	W0919 22:56:11.279984  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.279993  108877 retry.go:31] will retry after 12.318965ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.293297  108877 provision.go:84] configureAuth start
	I0919 22:56:11.293408  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:11.311440  108877 provision.go:143] copyHostCerts
	I0919 22:56:11.311481  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:11.311518  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:11.311531  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:11.311593  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:11.311690  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:11.311716  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:11.311727  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:11.311758  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:11.311821  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:11.311848  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:11.311857  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:11.311888  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:11.311956  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:11.580231  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:11.580306  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:11.580350  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:11.599414  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:11.635618  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.635650  108877 retry.go:31] will retry after 277.201613ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:11.949314  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.949341  108877 retry.go:31] will retry after 274.628798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:12.261504  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:12.261533  108877 retry.go:31] will retry after 791.765374ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:13.092279  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.092350  108877 retry.go:31] will retry after 323.897677ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.416868  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:13.437301  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:13.474299  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.474337  108877 retry.go:31] will retry after 200.730433ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:13.711949  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.711988  108877 retry.go:31] will retry after 539.542496ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:14.289044  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.289078  108877 retry.go:31] will retry after 383.679218ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:14.710216  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.710308  108877 provision.go:87] duration metric: took 3.416985511s to configureAuth
	W0919 22:56:14.710319  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.710331  108877 retry.go:31] will retry after 19.04317ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
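
Before every SSH attempt the log runs docker container inspect with a Go template to discover which host port is mapped to the node's 22/tcp. A small sketch of that lookup, shelling out to the docker CLI with the same template and container name shown above (the surrounding program is illustrative only, not minikube's cli_runner):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as in the log: index the published ports map at "22/tcp"
	// and print the first binding's HostPort (32848 in this run).
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, "ha-984158-m04").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}
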
	I0919 22:56:14.729514  108877 provision.go:84] configureAuth start
	I0919 22:56:14.729620  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:14.748043  108877 provision.go:143] copyHostCerts
	I0919 22:56:14.748082  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:14.748148  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:14.748161  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:14.748230  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:14.748328  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:14.748367  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:14.748378  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:14.748413  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:14.748479  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:14.748507  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:14.748517  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:14.748546  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:14.748617  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:15.109353  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:15.109409  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:15.109441  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:15.128026  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:15.164949  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.164987  108877 retry.go:31] will retry after 172.597249ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:15.374972  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.375000  108877 retry.go:31] will retry after 222.185257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:15.633045  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.633082  108877 retry.go:31] will retry after 703.284522ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:16.372656  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.372734  108877 retry.go:31] will retry after 261.771317ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.635337  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:16.654949  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:16.690945  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.690979  108877 retry.go:31] will retry after 300.102808ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.027866  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.027899  108877 retry.go:31] will retry after 309.831037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.376137  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.376168  108877 retry.go:31] will retry after 468.148418ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.880961  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.880988  108877 retry.go:31] will retry after 684.79805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:18.603567  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:18.603671  108877 provision.go:87] duration metric: took 3.874130397s to configureAuth
	W0919 22:56:18.603685  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:18.603700  108877 retry.go:31] will retry after 42.064967ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
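
The "generating server cert" step also recurs on every pass; the log shows it issues a server certificate whose SANs are 127.0.0.1, 192.168.49.5, ha-984158-m04, localhost and minikube. A rough, self-signed approximation using Go's crypto/x509 (the real step signs with the minikube CA key; this sketch only shows how that SAN list ends up in the certificate template):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs and org taken from the provision.go:117 line above; self-signed here
	// purely for illustration.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-984158-m04"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-984158-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.5")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
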
	I0919 22:56:18.645896  108877 provision.go:84] configureAuth start
	I0919 22:56:18.646008  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:18.665460  108877 provision.go:143] copyHostCerts
	I0919 22:56:18.665495  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:18.665529  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:18.665539  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:18.665594  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:18.665668  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:18.665686  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:18.665693  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:18.665713  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:18.665754  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:18.665771  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:18.665777  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:18.665797  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:18.665844  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:19.242094  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:19.242156  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:19.242191  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:19.260155  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:19.296012  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.296038  108877 retry.go:31] will retry after 245.481119ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:19.578197  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.578231  108877 retry.go:31] will retry after 268.274354ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:19.882353  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.882415  108877 retry.go:31] will retry after 563.481155ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:20.482263  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.482363  108877 retry.go:31] will retry after 188.022762ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.670631  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:20.690671  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:20.726599  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.726629  108877 retry.go:31] will retry after 132.052233ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:20.894470  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.894501  108877 retry.go:31] will retry after 333.068816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:21.263912  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.263937  108877 retry.go:31] will retry after 616.384688ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:21.917331  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.917427  108877 provision.go:87] duration metric: took 3.271503829s to configureAuth
	W0919 22:56:21.917439  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.917451  108877 retry.go:31] will retry after 63.141944ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.980683  108877 provision.go:84] configureAuth start
	I0919 22:56:21.980783  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:21.997490  108877 provision.go:143] copyHostCerts
	I0919 22:56:21.997546  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:21.997591  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:21.997601  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:21.997674  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:21.997779  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:21.997809  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:21.997816  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:21.997849  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:21.997918  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:21.997947  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:21.997956  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:21.997986  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:21.998059  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:22.147518  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:22.147575  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:22.147622  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:22.166129  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:22.203176  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:22.203206  108877 retry.go:31] will retry after 355.464116ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:22.595615  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:22.595643  108877 retry.go:31] will retry after 381.375504ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:23.013375  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.013405  108877 retry.go:31] will retry after 485.129276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:23.533999  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.534064  108877 retry.go:31] will retry after 259.478636ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.794591  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:23.813276  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:23.848854  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.848883  108877 retry.go:31] will retry after 136.979108ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.022487  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.022517  108877 retry.go:31] will retry after 430.182854ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.489381  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.489421  108877 retry.go:31] will retry after 440.378545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.966182  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.966213  108877 retry.go:31] will retry after 570.593495ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:25.572888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.572980  108877 provision.go:87] duration metric: took 3.592258128s to configureAuth
	W0919 22:56:25.572991  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.573002  108877 retry.go:31] will retry after 80.275673ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.654286  108877 provision.go:84] configureAuth start
	I0919 22:56:25.654397  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:25.673356  108877 provision.go:143] copyHostCerts
	I0919 22:56:25.673394  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:25.673430  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:25.673441  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:25.673503  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:25.673583  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:25.673602  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:25.673609  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:25.673633  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:25.673708  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:25.673726  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:25.673732  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:25.673750  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:25.673798  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:25.978732  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:25.978789  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:25.978821  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:25.998793  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:26.035722  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.035752  108877 retry.go:31] will retry after 185.817603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:26.258692  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.258726  108877 retry.go:31] will retry after 366.478539ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:26.662736  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.662770  108877 retry.go:31] will retry after 737.24048ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:27.436960  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.437068  108877 retry.go:31] will retry after 357.474232ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.794679  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:27.812988  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:27.848661  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.848697  108877 retry.go:31] will retry after 227.065335ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.113046  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.113086  108877 retry.go:31] will retry after 331.805613ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.482729  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.482755  108877 retry.go:31] will retry after 457.757799ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.977064  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.977208  108877 provision.go:87] duration metric: took 3.322888473s to configureAuth
	W0919 22:56:28.977225  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.977238  108877 retry.go:31] will retry after 82.927245ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.060500  108877 provision.go:84] configureAuth start
	I0919 22:56:29.060615  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:29.079194  108877 provision.go:143] copyHostCerts
	I0919 22:56:29.079237  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:29.079276  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:29.079288  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:29.079351  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:29.079454  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:29.079480  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:29.079488  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:29.079525  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:29.079599  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:29.079623  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:29.079631  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:29.079664  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:29.079736  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:29.134695  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:29.134761  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:29.134810  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:29.152678  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:29.188254  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.188282  108877 retry.go:31] will retry after 137.720284ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:29.363383  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.363416  108877 retry.go:31] will retry after 506.726285ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:29.908847  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.908880  108877 retry.go:31] will retry after 411.304777ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:30.355704  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.355793  108877 retry.go:31] will retry after 203.717987ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.560235  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:30.578622  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:30.616921  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.616952  108877 retry.go:31] will retry after 370.771171ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.025652  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.025682  108877 retry.go:31] will retry after 362.677663ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.426077  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.426132  108877 retry.go:31] will retry after 441.8947ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.904914  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.904994  108877 provision.go:87] duration metric: took 2.844469676s to configureAuth
	W0919 22:56:31.905001  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.905011  108877 retry.go:31] will retry after 102.648658ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.008362  108877 provision.go:84] configureAuth start
	I0919 22:56:32.008479  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:32.026977  108877 provision.go:143] copyHostCerts
	I0919 22:56:32.027012  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:32.027044  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:32.027054  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:32.027121  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:32.027216  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:32.027240  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:32.027244  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:32.027266  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:32.027319  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:32.027335  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:32.027339  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:32.027361  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:32.027437  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:32.395029  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:32.395089  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:32.395137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:32.413735  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:32.449599  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.449631  108877 retry.go:31] will retry after 238.059442ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:32.724337  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.724367  108877 retry.go:31] will retry after 445.437522ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:33.205585  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:33.205623  108877 retry.go:31] will retry after 605.339888ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:33.847039  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:33.847151  108877 retry.go:31] will retry after 217.437844ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.065727  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:34.084461  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:34.121069  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.121144  108877 retry.go:31] will retry after 191.153871ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:34.347528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.347617  108877 retry.go:31] will retry after 310.100528ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:34.694764  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.694791  108877 retry.go:31] will retry after 336.844738ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:35.068059  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.068095  108877 retry.go:31] will retry after 778.88836ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:35.885735  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.885829  108877 provision.go:87] duration metric: took 3.877417139s to configureAuth
	W0919 22:56:35.885839  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.885851  108877 retry.go:31] will retry after 310.258288ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:36.196298  108877 provision.go:84] configureAuth start
	I0919 22:56:36.196405  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:36.216801  108877 provision.go:143] copyHostCerts
	I0919 22:56:36.216840  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:36.216869  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:36.216878  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:36.216935  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:36.217042  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:36.217081  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:36.217086  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:36.217132  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:36.217198  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:36.217218  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:36.217225  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:36.217246  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:36.217299  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:36.911886  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:36.911947  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:36.911991  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:36.930148  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:36.965855  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:36.965887  108877 retry.go:31] will retry after 268.589558ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:37.271625  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:37.271657  108877 retry.go:31] will retry after 479.678948ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:37.788516  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:37.788543  108877 retry.go:31] will retry after 402.18824ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:38.227194  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.227284  108877 retry.go:31] will retry after 224.738673ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.452790  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:38.471469  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:38.507319  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.507351  108877 retry.go:31] will retry after 240.712716ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:38.784559  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.784596  108877 retry.go:31] will retry after 538.694984ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:39.360038  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.360067  108877 retry.go:31] will retry after 536.342982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:39.932339  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.932422  108877 provision.go:87] duration metric: took 3.736097795s to configureAuth
	W0919 22:56:39.932430  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.932443  108877 retry.go:31] will retry after 206.453606ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.139916  108877 provision.go:84] configureAuth start
	I0919 22:56:40.140025  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:40.159279  108877 provision.go:143] copyHostCerts
	I0919 22:56:40.159324  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:40.159368  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:40.159381  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:40.159448  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:40.159547  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:40.159573  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:40.159581  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:40.159617  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:40.159717  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:40.159742  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:40.159750  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:40.159784  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:40.159858  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:40.276670  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:40.276739  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:40.276783  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:40.297504  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:40.334785  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.334822  108877 retry.go:31] will retry after 328.004509ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:40.701136  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.701168  108877 retry.go:31] will retry after 413.032497ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:41.151037  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:41.151097  108877 retry.go:31] will retry after 823.289324ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:42.010820  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.010916  108877 provision.go:87] duration metric: took 1.870966844s to configureAuth
	W0919 22:56:42.010931  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.010950  108877 retry.go:31] will retry after 488.057311ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.499593  108877 provision.go:84] configureAuth start
	I0919 22:56:42.499692  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:42.517980  108877 provision.go:143] copyHostCerts
	I0919 22:56:42.518015  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:42.518052  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:42.518058  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:42.518129  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:42.518224  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:42.518244  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:42.518249  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:42.518271  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:42.518325  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:42.518342  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:42.518345  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:42.518366  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:42.518417  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:42.823337  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:42.823395  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:42.823438  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:42.841811  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:42.877778  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.877819  108877 retry.go:31] will retry after 298.649157ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:43.212922  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:43.212958  108877 retry.go:31] will retry after 522.015069ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:43.771555  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:43.771589  108877 retry.go:31] will retry after 664.326257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:44.472134  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.472221  108877 retry.go:31] will retry after 153.745574ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.626669  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:44.645720  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:44.681791  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.681823  108877 retry.go:31] will retry after 365.465122ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:45.084885  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:45.084914  108877 retry.go:31] will retry after 466.75968ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:45.589343  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:45.589390  108877 retry.go:31] will retry after 488.601857ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:46.115089  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.115225  108877 provision.go:87] duration metric: took 3.615609417s to configureAuth
	W0919 22:56:46.115233  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.115249  108877 retry.go:31] will retry after 754.938625ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.871274  108877 provision.go:84] configureAuth start
	I0919 22:56:46.871388  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:46.889941  108877 provision.go:143] copyHostCerts
	I0919 22:56:46.889990  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:46.890037  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:46.890050  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:46.890160  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:46.890269  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:46.890296  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:46.890304  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:46.890360  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:46.890434  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:46.890459  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:46.890469  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:46.890499  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:46.890572  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:46.997796  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:46.997867  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:46.997912  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:47.017254  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:47.054744  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.054778  108877 retry.go:31] will retry after 308.508878ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:47.400043  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.400080  108877 retry.go:31] will retry after 493.608013ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:47.930962  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.930992  108877 retry.go:31] will retry after 488.73635ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:48.456395  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.456470  108877 retry.go:31] will retry after 197.32939ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.654934  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:48.674211  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:48.710143  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.710175  108877 retry.go:31] will retry after 134.018657ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:48.879983  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.880019  108877 retry.go:31] will retry after 327.178794ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:49.243596  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.243627  108877 retry.go:31] will retry after 696.883564ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:49.978365  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.978446  108877 provision.go:87] duration metric: took 3.10712947s to configureAuth
	W0919 22:56:49.978452  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.978461  108877 retry.go:31] will retry after 1.108872523s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.087560  108877 provision.go:84] configureAuth start
	I0919 22:56:51.087642  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:51.106657  108877 provision.go:143] copyHostCerts
	I0919 22:56:51.106704  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:51.106742  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:51.106755  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:51.106824  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:51.106932  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:51.106959  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:51.106965  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:51.106999  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:51.107073  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:51.107116  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:51.107123  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:51.107158  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:51.107241  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:51.139574  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:51.139642  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:51.139689  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:51.158649  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:51.195066  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.195097  108877 retry.go:31] will retry after 362.143833ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:51.594416  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.594447  108877 retry.go:31] will retry after 303.523109ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:51.934745  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.934770  108877 retry.go:31] will retry after 543.851524ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:52.515882  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.515974  108877 retry.go:31] will retry after 322.599797ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.839665  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:52.861040  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:52.897445  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.897480  108877 retry.go:31] will retry after 148.171313ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:53.082549  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:53.082578  108877 retry.go:31] will retry after 259.258531ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:53.377992  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:53.378028  108877 retry.go:31] will retry after 736.784844ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:54.152006  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:54.152129  108877 provision.go:87] duration metric: took 3.064543662s to configureAuth
	W0919 22:56:54.152144  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:54.152162  108877 retry.go:31] will retry after 2.515449118s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:56.669831  108877 provision.go:84] configureAuth start
	I0919 22:56:56.670043  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:56.688740  108877 provision.go:143] copyHostCerts
	I0919 22:56:56.688785  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:56.688823  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:56.688836  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:56.688903  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:56.689008  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:56.689034  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:56.689038  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:56.689070  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:56.689192  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:56.689224  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:56.689237  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:56.689269  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:56.689352  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:57.015996  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:57.016051  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:57.016137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:57.034711  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:57.070767  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.070801  108877 retry.go:31] will retry after 268.964622ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:57.376240  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.376285  108877 retry.go:31] will retry after 515.618696ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:57.928822  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.928857  108877 retry.go:31] will retry after 709.3811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:58.674783  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:58.674856  108877 retry.go:31] will retry after 326.321162ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.001369  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:59.019209  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:59.055625  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.055653  108877 retry.go:31] will retry after 129.805557ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:59.222051  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.222084  108877 retry.go:31] will retry after 547.397983ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:59.805545  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.805581  108877 retry.go:31] will retry after 688.131924ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:00.530240  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:00.530347  108877 provision.go:87] duration metric: took 3.860436584s to configureAuth
	W0919 22:57:00.530368  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:00.530382  108877 retry.go:31] will retry after 3.473490773s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:04.005067  108877 provision.go:84] configureAuth start
	I0919 22:57:04.005190  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:04.022589  108877 provision.go:143] copyHostCerts
	I0919 22:57:04.022626  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:04.022653  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:04.022659  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:04.022725  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:04.022798  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:04.022819  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:04.022824  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:04.022844  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:04.022887  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:04.022903  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:04.022908  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:04.022926  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:04.022998  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:04.433055  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:04.433134  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:04.433169  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:04.452162  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:04.487790  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:04.487816  108877 retry.go:31] will retry after 301.604842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:04.826348  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:04.826384  108877 retry.go:31] will retry after 320.796627ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:05.183582  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:05.183617  108877 retry.go:31] will retry after 607.690423ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:05.826718  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:05.826781  108877 retry.go:31] will retry after 374.651417ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.202474  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:06.220929  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:06.258097  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.258150  108877 retry.go:31] will retry after 183.921318ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:06.478404  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.478436  108877 retry.go:31] will retry after 368.414927ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:06.883316  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.883350  108877 retry.go:31] will retry after 514.052172ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:07.434181  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:07.434210  108877 retry.go:31] will retry after 595.491046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:08.065650  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:08.065740  108877 provision.go:87] duration metric: took 4.060647903s to configureAuth
	W0919 22:57:08.065753  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:08.065765  108877 retry.go:31] will retry after 2.793620534s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:10.859931  108877 provision.go:84] configureAuth start
	I0919 22:57:10.860020  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:10.877832  108877 provision.go:143] copyHostCerts
	I0919 22:57:10.877873  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:10.877909  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:10.877923  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:10.877991  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:10.878141  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:10.878173  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:10.878181  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:10.878215  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:10.878285  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:10.878311  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:10.878321  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:10.878351  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:10.878423  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:10.984390  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:10.984447  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:10.984480  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:11.003216  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:11.038380  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.038425  108877 retry.go:31] will retry after 370.890016ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:11.445998  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.446033  108877 retry.go:31] will retry after 188.555467ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:11.671096  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.671146  108877 retry.go:31] will retry after 817.050629ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:12.525157  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.525243  108877 retry.go:31] will retry after 306.251712ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.831810  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:12.849689  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:12.885775  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.885803  108877 retry.go:31] will retry after 132.37261ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.055528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.055563  108877 retry.go:31] will retry after 238.491118ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.330205  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.330240  108877 retry.go:31] will retry after 464.873837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.831628  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.831673  108877 retry.go:31] will retry after 494.104964ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:14.362527  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:14.362621  108877 provision.go:87] duration metric: took 3.502663397s to configureAuth
	W0919 22:57:14.362636  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:14.362646  108877 retry.go:31] will retry after 3.171081362s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:17.533852  108877 provision.go:84] configureAuth start
	I0919 22:57:17.533970  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:17.553677  108877 provision.go:143] copyHostCerts
	I0919 22:57:17.553714  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:17.553749  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:17.553761  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:17.553840  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:17.553935  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:17.553961  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:17.553968  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:17.553998  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:17.554058  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:17.554084  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:17.554090  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:17.554163  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:17.554245  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:17.842271  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:17.842335  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:17.842369  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:17.860493  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:17.896364  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:17.896395  108877 retry.go:31] will retry after 245.526695ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.178923  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.178957  108877 retry.go:31] will retry after 291.474893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.506844  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.506893  108877 retry.go:31] will retry after 428.15725ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.971538  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.971609  108877 retry.go:31] will retry after 328.173688ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.300150  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:19.318702  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:19.355566  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.355602  108877 retry.go:31] will retry after 195.443544ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:19.588029  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.588064  108877 retry.go:31] will retry after 197.002623ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:19.820782  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.820815  108877 retry.go:31] will retry after 306.66473ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:20.163931  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:20.164025  108877 provision.go:87] duration metric: took 2.630147192s to configureAuth
	W0919 22:57:20.164039  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:20.164057  108877 retry.go:31] will retry after 5.88081309s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.047184  108877 provision.go:84] configureAuth start
	I0919 22:57:26.047287  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:26.066549  108877 provision.go:143] copyHostCerts
	I0919 22:57:26.066588  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:26.066631  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:26.066646  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:26.066714  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:26.066812  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:26.066839  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:26.066851  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:26.066885  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:26.066949  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:26.066974  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:26.066984  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:26.067013  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:26.067083  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:26.430292  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:26.430358  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:26.430413  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:26.448874  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:26.485062  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.485093  108877 retry.go:31] will retry after 343.157141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:26.863852  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.863899  108877 retry.go:31] will retry after 287.302046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:27.186803  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:27.186834  108877 retry.go:31] will retry after 756.208988ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:27.979672  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:27.979754  108877 retry.go:31] will retry after 357.114937ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.337288  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:28.359209  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:28.395795  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.395827  108877 retry.go:31] will retry after 334.191783ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:28.765402  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.765435  108877 retry.go:31] will retry after 479.582515ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:29.282486  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:29.282516  108877 retry.go:31] will retry after 731.889055ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.052091  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052209  108877 provision.go:87] duration metric: took 4.00499904s to configureAuth
	W0919 22:57:30.052219  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052233  108877 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052243  108877 machine.go:96] duration metric: took 11m1.1194403s to provisionDockerMachine
	I0919 22:57:30.052319  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:57:30.052364  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:30.072494  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:30.108866  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.108893  108877 retry.go:31] will retry after 233.851556ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.378888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.378916  108877 retry.go:31] will retry after 336.456758ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.752888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.752921  108877 retry.go:31] will retry after 321.92269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:31.112464  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:31.112493  108877 retry.go:31] will retry after 649.982973ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:31.801129  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:31.801197  108877 retry.go:31] will retry after 218.292036ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.020708  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:32.039859  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:32.075888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.075943  108877 retry.go:31] will retry after 192.036574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:32.306777  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.306815  108877 retry.go:31] will retry after 210.414159ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:32.556133  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.556165  108877 retry.go:31] will retry after 739.62039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331746  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331819  108877 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331833  108877 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.331892  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:57:33.331942  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:33.350393  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:33.386406  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.386434  108877 retry.go:31] will retry after 349.776959ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.772275  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.772308  108877 retry.go:31] will retry after 325.543128ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:34.135049  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:34.135160  108877 retry.go:31] will retry after 409.049881ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:34.579989  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:34.580036  108877 retry.go:31] will retry after 621.130338ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237720  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237802  108877 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237833  108877 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:35.237839  108877 fix.go:56] duration metric: took 11m6.646308817s for fixHost
	I0919 22:57:35.237846  108877 start.go:83] releasing machines lock for "ha-984158-m04", held for 11m6.646337997s
	W0919 22:57:35.237863  108877 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237942  108877 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:35.237965  108877 start.go:729] Will try again in 5 seconds ...
	I0919 22:57:40.239023  108877 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:57:40.239172  108877 start.go:364] duration metric: took 81.107µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:57:40.239194  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:57:40.239201  108877 fix.go:54] fixHost starting: m04
	I0919 22:57:40.239431  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:57:40.257713  108877 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Running err=<nil>
	W0919 22:57:40.257736  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:57:40.259573  108877 out.go:252] * Updating the running docker "ha-984158-m04" container ...
	I0919 22:57:40.259646  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:57:40.259712  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:40.278585  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:57:40.278817  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:57:40.278833  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:57:40.315069  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:43.351146  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:46.388339  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:49.426746  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:52.463707  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:55.500573  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:58.538182  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:01.575927  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:04.616375  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:07.653326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:10.690229  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:13.728885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:16.768560  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:19.806622  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:22.842755  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:25.881701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:28.917980  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:31.955190  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:34.992919  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:38.030446  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:41.067474  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:44.104421  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:47.142056  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:50.180294  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:53.217514  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:56.255024  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:59.292319  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:02.329219  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:05.366989  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:08.402945  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:11.439816  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:14.476386  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:17.513513  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:20.549641  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:23.586144  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:26.623276  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:29.660785  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:32.697636  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:35.735863  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:38.774479  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:41.811818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:44.850018  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:47.887261  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:50.924246  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:53.961078  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:56.999866  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:00.037067  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:03.074676  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:06.113750  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:09.151270  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:12.189380  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:15.227164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:18.263925  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:21.301513  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:24.339191  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:27.375639  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:30.410883  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:33.448495  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:36.487617  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:39.525454  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:42.525653  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:00:42.525702  108877 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 23:00:42.525804  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 23:00:42.546781  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 23:00:42.547011  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 23:00:42.547024  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m04 && echo "ha-984158-m04" | sudo tee /etc/hostname
	I0919 23:00:42.582767  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:45.622025  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:48.658598  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:51.696578  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:54.735790  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:57.772254  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:00.809145  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:03.847360  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:06.886611  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:09.924681  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:12.962276  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:16.000899  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:19.036953  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:22.074167  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:25.113341  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:28.150651  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:31.187163  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:34.225742  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:37.261917  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:40.297809  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:43.333952  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:46.372525  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:49.410324  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:52.446487  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:55.484663  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:58.522655  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:01.563288  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:04.604701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:07.641452  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:10.678188  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:13.715164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:16.755096  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:19.793467  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:22.831053  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:25.869043  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:28.905456  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:31.942385  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:34.980828  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:38.019484  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:41.055921  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:44.092932  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:47.133154  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:50.170708  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:53.207283  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:56.245651  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:59.283000  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:02.320057  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:05.356723  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:08.393190  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:11.429671  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:14.469188  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:17.505418  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain

                                                
                                                
** /stderr **
ha_test.go:564: failed to start cluster. args "out/minikube-linux-amd64 -p ha-984158 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio" : signal: killed
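For reference, the failing handshake above ("attempted methods [none publickey], no supported methods remain") can be retried by hand using the SSH client parameters minikube records for the primary node further down in this log: host 127.0.0.1, port 32838, user docker, and the profile key under .minikube/machines/ha-984158/id_rsa. This is only a sketch of that check, not the exact failing dial; the dials that keep timing out here are likely aimed at a secondary node, whose host port and key differ and are not shown in this excerpt.

	ssh -i /home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa -p 32838 docker@127.0.0.1 hostname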
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect ha-984158
helpers_test.go:243: (dbg) docker inspect ha-984158:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	        "Created": "2025-09-19T22:33:24.996172492Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 109071,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T22:46:12.450409445Z",
	            "FinishedAt": "2025-09-19T22:46:11.671268254Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hostname",
	        "HostsPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/hosts",
	        "LogPath": "/var/lib/docker/containers/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca/0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca-json.log",
	        "Name": "/ha-984158",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "ha-984158:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "ha-984158",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "0e7c4b5cff2aa6cdcb2403afad569ec0f7704c999522d6f156cdb773d2857cca",
	                "LowerDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf8312299ea240c21ae5004f49342ac196796ec1ccaa21c3720e1d884bad5886/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "ha-984158",
	                "Source": "/var/lib/docker/volumes/ha-984158/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "ha-984158",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "ha-984158",
	                "name.minikube.sigs.k8s.io": "ha-984158",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "576b3cdcd2c7d690a213e6dbf0192a47c0acc20b5ec550dd63063617c76d89a7",
	            "SandboxKey": "/var/run/docker/netns/576b3cdcd2c7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32838"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32839"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32842"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32840"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32841"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "ha-984158": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "e6:65:90:7e:ed:27",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8b1b6c79ac61dbabfd8f1ce8959ab9a2616212ddaf4680b1bb2cc7b6f6005d0e",
	                    "EndpointID": "eafeb194d3f3da2871c1d356ee7ed384472f41eaf2fbb93251198fcc199da965",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "ha-984158",
	                        "0e7c4b5cff2a"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
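As a cross-check on the Ports section above (22/tcp published on 127.0.0.1:32838), the same mapping can be read back with the inspect format query that the start log below uses when it builds its SSH client:

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158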
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-984158 -n ha-984158
helpers_test.go:252: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 logs -n 25: (1.267592668s)
helpers_test.go:260: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │  PROFILE  │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ cp      │ ha-984158 cp ha-984158-m03:/home/docker/cp-test.txt ha-984158-m04:/home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test_ha-984158-m03_ha-984158-m04.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp testdata/cp-test.txt ha-984158-m04:/home/docker/cp-test.txt                                                             │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile4013665013/001/cp-test_ha-984158-m04.txt │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158:/home/docker/cp-test_ha-984158-m04_ha-984158.txt                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158.txt                                                 │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m02:/home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m02 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m02.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ cp      │ ha-984158 cp ha-984158-m04:/home/docker/cp-test.txt ha-984158-m03:/home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt               │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m04 sudo cat /home/docker/cp-test.txt                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ ssh     │ ha-984158 ssh -n ha-984158-m03 sudo cat /home/docker/cp-test_ha-984158-m04_ha-984158-m03.txt                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │                     │
	│ node    │ ha-984158 node stop m02 --alsologtostderr -v 5                                                                                       │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node start m02 --alsologtostderr -v 5                                                                                      │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:36 UTC │ 19 Sep 25 22:36 UTC │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │                     │
	│ stop    │ ha-984158 stop --alsologtostderr -v 5                                                                                                │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:37 UTC │ 19 Sep 25 22:38 UTC │
	│ start   │ ha-984158 start --wait true --alsologtostderr -v 5                                                                                   │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:38 UTC │                     │
	│ node    │ ha-984158 node list --alsologtostderr -v 5                                                                                           │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │                     │
	│ node    │ ha-984158 node delete m03 --alsologtostderr -v 5                                                                                     │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │ 19 Sep 25 22:45 UTC │
	│ stop    │ ha-984158 stop --alsologtostderr -v 5                                                                                                │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:45 UTC │ 19 Sep 25 22:46 UTC │
	│ start   │ ha-984158 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio                                         │ ha-984158 │ jenkins │ v1.37.0 │ 19 Sep 25 22:46 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:46:12
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:46:12.216361  108877 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:46:12.216654  108877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.216665  108877 out.go:374] Setting ErrFile to fd 2...
	I0919 22:46:12.216669  108877 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.216929  108877 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:46:12.217473  108877 out.go:368] Setting JSON to false
	I0919 22:46:12.218412  108877 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":5322,"bootTime":1758316650,"procs":187,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:46:12.218505  108877 start.go:140] virtualization: kvm guest
	I0919 22:46:12.220990  108877 out.go:179] * [ha-984158] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:46:12.222652  108877 notify.go:220] Checking for updates...
	I0919 22:46:12.222716  108877 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:46:12.224405  108877 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:46:12.226356  108877 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:12.227945  108877 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:46:12.231398  108877 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:46:12.233378  108877 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:46:12.235393  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:12.235929  108877 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:46:12.259440  108877 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:46:12.259601  108877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:46:12.315152  108877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:46:12.305215381 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:46:12.315257  108877 docker.go:318] overlay module found
	I0919 22:46:12.317207  108877 out.go:179] * Using the docker driver based on existing profile
	I0919 22:46:12.318613  108877 start.go:304] selected driver: docker
	I0919 22:46:12.318631  108877 start.go:918] validating driver "docker" against &{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNam
es:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false ku
bevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:12.318764  108877 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:46:12.318866  108877 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:46:12.375932  108877 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:0 ContainersPaused:0 ContainersStopped:3 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:false NGoroutines:45 SystemTime:2025-09-19 22:46:12.36611658 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:46:12.376654  108877 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:46:12.376683  108877 cni.go:84] Creating CNI manager for ""
	I0919 22:46:12.376742  108877 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:46:12.376800  108877 start.go:348] cluster config:
	{Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: N
etworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-devic
e-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:12.378603  108877 out.go:179] * Starting "ha-984158" primary control-plane node in "ha-984158" cluster
	I0919 22:46:12.380006  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:12.381572  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:12.382857  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:12.382906  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:12.382923  108877 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:46:12.382936  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:12.383039  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:12.383055  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:12.383212  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:12.403326  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:12.403345  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:12.403361  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:12.403384  108877 start.go:360] acquireMachinesLock for ha-984158: {Name:mkc72a6d4fef468a73a10e88f019b77c34dadd97 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:12.403455  108877 start.go:364] duration metric: took 45.824µs to acquireMachinesLock for "ha-984158"
	I0919 22:46:12.403473  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:12.403482  108877 fix.go:54] fixHost starting: 
	I0919 22:46:12.403690  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:12.421194  108877 fix.go:112] recreateIfNeeded on ha-984158: state=Stopped err=<nil>
	W0919 22:46:12.421238  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:12.423604  108877 out.go:252] * Restarting existing docker container for "ha-984158" ...
	I0919 22:46:12.423684  108877 cli_runner.go:164] Run: docker start ha-984158
	I0919 22:46:12.673870  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:12.696836  108877 kic.go:430] container "ha-984158" state is running.
	I0919 22:46:12.697260  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:12.718695  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:12.718941  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:12.719002  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:12.741823  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:12.742061  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:12.742077  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:12.742802  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51412->127.0.0.1:32838: read: connection reset by peer
	I0919 22:46:15.881704  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:46:15.881745  108877 ubuntu.go:182] provisioning hostname "ha-984158"
	I0919 22:46:15.881804  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:15.901150  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:15.901417  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:15.901437  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158 && echo "ha-984158" | sudo tee /etc/hostname
	I0919 22:46:16.050888  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158
	
	I0919 22:46:16.050963  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.068615  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:16.068892  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:16.068914  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:46:16.208904  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:16.208934  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:46:16.208956  108877 ubuntu.go:190] setting up certificates
	I0919 22:46:16.208967  108877 provision.go:84] configureAuth start
	I0919 22:46:16.209031  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:16.227718  108877 provision.go:143] copyHostCerts
	I0919 22:46:16.227763  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:16.227792  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:46:16.227811  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:16.227885  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:46:16.227987  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:16.228007  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:46:16.228013  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:16.228040  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:46:16.228150  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:16.228172  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:46:16.228179  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:16.228209  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:46:16.228337  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158 san=[127.0.0.1 192.168.49.2 ha-984158 localhost minikube]
	I0919 22:46:16.573002  108877 provision.go:177] copyRemoteCerts
	I0919 22:46:16.573064  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:46:16.573113  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.592217  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:16.690168  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:46:16.690236  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:46:16.715223  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:46:16.715291  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0919 22:46:16.740942  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:46:16.741005  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:46:16.766354  108877 provision.go:87] duration metric: took 557.37452ms to configureAuth
	I0919 22:46:16.766382  108877 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:46:16.766610  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:16.766705  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:16.786657  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:16.786955  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32838 <nil> <nil>}
	I0919 22:46:16.786980  108877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:46:17.096636  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:46:17.096672  108877 machine.go:96] duration metric: took 4.377714802s to provisionDockerMachine
	I0919 22:46:17.096688  108877 start.go:293] postStartSetup for "ha-984158" (driver="docker")
	I0919 22:46:17.096701  108877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:46:17.096770  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:46:17.096823  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.119671  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.218230  108877 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:46:17.221650  108877 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:46:17.221677  108877 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:46:17.221684  108877 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:46:17.221690  108877 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:46:17.221700  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:46:17.221764  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:46:17.221848  108877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:46:17.221859  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:46:17.221941  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:46:17.231608  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:17.256965  108877 start.go:296] duration metric: took 160.262267ms for postStartSetup
	I0919 22:46:17.257080  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:17.257142  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.275475  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.368260  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:17.372717  108877 fix.go:56] duration metric: took 4.969233422s for fixHost
	I0919 22:46:17.372745  108877 start.go:83] releasing machines lock for "ha-984158", held for 4.969278s
	I0919 22:46:17.372815  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158
	I0919 22:46:17.390438  108877 ssh_runner.go:195] Run: cat /version.json
	I0919 22:46:17.390483  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.390536  108877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:46:17.390601  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:17.410661  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.410957  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:17.578439  108877 ssh_runner.go:195] Run: systemctl --version
	I0919 22:46:17.583306  108877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:46:17.724560  108877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:46:17.729340  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:17.738652  108877 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:46:17.738736  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:17.748613  108877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:46:17.748636  108877 start.go:495] detecting cgroup driver to use...
	I0919 22:46:17.748665  108877 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:46:17.748708  108877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:46:17.761846  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:46:17.774159  108877 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:46:17.774220  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:46:17.786916  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:46:17.799471  108877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:46:17.862027  108877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:46:17.932767  108877 docker.go:234] disabling docker service ...
	I0919 22:46:17.932824  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:46:17.946036  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:46:17.958434  108877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:46:18.026742  108877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:46:18.092388  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:46:18.104517  108877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:46:18.122118  108877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:46:18.122187  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.133296  108877 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:46:18.133358  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.144273  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.154713  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.165450  108877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:46:18.175471  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.186448  108877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.196793  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:18.207323  108877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:46:18.216504  108877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:46:18.226278  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:18.292582  108877 ssh_runner.go:195] Run: sudo systemctl restart crio
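For reference, a minimal sketch of the two files configured by the commands above, reconstructed from the tee/sed invocations in this log rather than captured from the node (section headers and unrelated keys omitted):

	# /etc/crictl.yaml
	runtime-endpoint: unix:///var/run/crio/crio.sock

	# /etc/crio/crio.conf.d/02-crio.conf (relevant keys only)
	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]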
	I0919 22:46:18.395143  108877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:46:18.395208  108877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:46:18.399260  108877 start.go:563] Will wait 60s for crictl version
	I0919 22:46:18.399345  108877 ssh_runner.go:195] Run: which crictl
	I0919 22:46:18.403306  108877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:46:18.439273  108877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:46:18.439358  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:18.477736  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:18.517625  108877 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:46:18.519401  108877 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:46:18.538950  108877 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:46:18.543029  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:18.555164  108877 kubeadm.go:875] updating cluster {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIP
s:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false
logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 22:46:18.555281  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:18.555321  108877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:46:18.602120  108877 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:46:18.602145  108877 crio.go:433] Images already preloaded, skipping extraction
	I0919 22:46:18.602190  108877 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 22:46:18.638063  108877 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 22:46:18.638085  108877 cache_images.go:85] Images are preloaded, skipping loading
	I0919 22:46:18.638096  108877 kubeadm.go:926] updating node { 192.168.49.2 8443 v1.34.0 crio true true} ...
	I0919 22:46:18.638217  108877 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:46:18.638289  108877 ssh_runner.go:195] Run: crio config
	I0919 22:46:18.682755  108877 cni.go:84] Creating CNI manager for ""
	I0919 22:46:18.682776  108877 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0919 22:46:18.682785  108877 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 22:46:18.682804  108877 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-984158 NodeName:ha-984158 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 22:46:18.682949  108877 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-984158"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 22:46:18.682971  108877 kube-vip.go:115] generating kube-vip config ...
	I0919 22:46:18.683023  108877 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:46:18.695680  108877 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:18.695771  108877 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
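Because 'lsmod | grep ip_vs' returned nothing above, kube-vip is rendered without IPVS-based control-plane load balancing and relies on ARP leader election alone to announce the HA VIP 192.168.49.254 on eth0 (both values come from the generated config). A quick manual check on the current leader could look like the sketch below; the modprobe line is only a hypothetical way to make the ip_vs modules available, not something minikube runs here:

	# does the leader currently hold the VIP?
	ip addr show eth0 | grep 192.168.49.254
	# are the ipvs kernel modules present or loadable at all?
	lsmod | grep ip_vs || sudo modprobe ip_vs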
	I0919 22:46:18.695831  108877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:46:18.704995  108877 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:46:18.705090  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0919 22:46:18.714229  108877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (359 bytes)
	I0919 22:46:18.732876  108877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:46:18.751654  108877 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2205 bytes)
	I0919 22:46:18.771660  108877 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:46:18.791347  108877 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:46:18.795300  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
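The two /etc/hosts rewrites above (this one and the one at 22:46:18.543029) leave the node able to resolve the host gateway and the HA control-plane VIP. Reconstructed from the commands rather than from captured output, the appended entries would be:

	192.168.49.1	host.minikube.internal
	192.168.49.254	control-plane.minikube.internal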
	I0919 22:46:18.807294  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:18.870326  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:18.890598  108877 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.2
	I0919 22:46:18.890622  108877 certs.go:194] generating shared ca certs ...
	I0919 22:46:18.890642  108877 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:18.890820  108877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:46:18.890875  108877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:46:18.890884  108877 certs.go:256] generating profile certs ...
	I0919 22:46:18.890988  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:46:18.891026  108877 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d
	I0919 22:46:18.891041  108877 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2 192.168.49.3 192.168.49.254]
	I0919 22:46:19.605865  108877 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d ...
	I0919 22:46:19.605953  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d: {Name:mk7f25dd3beb69a2627b32c86fa05a4a9f1ad6c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:19.606168  108877 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d ...
	I0919 22:46:19.606186  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d: {Name:mk8f6bf1f9253215ea3b4b09434f0ad297843936 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:19.606312  108877 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt.cd8db51d -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt
	I0919 22:46:19.606498  108877 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.cd8db51d -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key
	I0919 22:46:19.606699  108877 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
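The apiserver certificate regenerated above is issued for the service IP, localhost, both control-plane node IPs and the HA VIP (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.49.2, 192.168.49.3, 192.168.49.254). If those SANs ever need verifying on the node, a standard openssl invocation like the following would do it (a sketch, not part of this log):

	sudo openssl x509 -noout -text -in /var/lib/minikube/certs/apiserver.crt \
	  | grep -A2 'Subject Alternative Name'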
	I0919 22:46:19.606716  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:46:19.606735  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:46:19.606749  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:46:19.606766  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:46:19.606780  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:46:19.606794  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:46:19.606807  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:46:19.606821  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:46:19.606887  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:46:19.606926  108877 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:46:19.606936  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:46:19.606966  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:46:19.606994  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:46:19.607023  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:46:19.607083  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:19.607136  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:19.607156  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.607172  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.608038  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:46:19.647257  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:46:19.680445  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:46:19.707341  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:46:19.732949  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:46:19.759195  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:46:19.784760  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:46:19.811628  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:46:19.838352  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:46:19.864825  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:46:19.890278  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:46:19.916079  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 22:46:19.935550  108877 ssh_runner.go:195] Run: openssl version
	I0919 22:46:19.941303  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:46:19.951377  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.955360  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.955418  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:46:19.962652  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:46:19.972203  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:46:19.984692  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.989793  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.989856  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:46:19.997255  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:46:20.007365  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:46:20.018217  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.022308  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.022372  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:20.029407  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
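The three blocks above follow the standard OpenSSL trust-store layout: each PEM under /usr/share/ca-certificates gets a symlink in /etc/ssl/certs named after its subject hash so OpenSSL can locate it. Doing the same for one certificate by hand would look roughly like this (a sketch mirroring the commands in the log):

	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"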
	I0919 22:46:20.039229  108877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:46:20.043177  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:46:20.050319  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:46:20.057484  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:46:20.064329  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:46:20.071249  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:46:20.078085  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
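Each '-checkend 86400' run above asks openssl whether the certificate expires within the next 24 hours (86400 seconds); a non-zero exit is what lets minikube decide a cert needs refreshing. Checking one of them by hand (a sketch):

	sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/etcd/server.crt \
	  && echo "valid for at least 24h" || echo "expires within 24h"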
	I0919 22:46:20.084889  108877 kubeadm.go:392] StartCluster: {Name:ha-984158 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[
] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.49.5 Port:0 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false log
viewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAu
thSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:46:20.085014  108877 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 22:46:20.085084  108877 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 22:46:20.126875  108877 cri.go:89] found id: "965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461"
	I0919 22:46:20.126897  108877 cri.go:89] found id: "59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce"
	I0919 22:46:20.126904  108877 cri.go:89] found id: "e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6"
	I0919 22:46:20.126908  108877 cri.go:89] found id: "28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0"
	I0919 22:46:20.126913  108877 cri.go:89] found id: "8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2"
	I0919 22:46:20.126919  108877 cri.go:89] found id: ""
	I0919 22:46:20.126969  108877 ssh_runner.go:195] Run: sudo runc list -f json
	I0919 22:46:20.152531  108877 cri.go:116] JSON = [{"ociVersion":"1.0.2-dev","id":"28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0","pid":821,"status":"running","bundle":"/run/containers/storage/overlay-containers/28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0/userdata","rootfs":"/var/lib/containers/storage/overlay/cc7fd9c1671034c7ec28c804e89098f3430de08294ed80b7199c664a5f72ba8e/merged","created":"2025-09-19T22:46:19.545218383Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d64ad60b","io.kubernetes.container.name":"kube-vip","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d64ad60b\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMes
sagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.459120421Z","io.kubernetes.cri-o.Image":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.ImageName":"ghcr.io/kube-vip/kube-vip:v1.0.0","io.kubernetes.cri-o.ImageRef":"765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-vip\",\"io.kubernetes.pod.name\":\"kube-vip-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17c8e4bb866faa0106347d8b7bccd341\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-vip-ha-984158_17c8e4bb866faa0106347d8b7bccd341/kube-vip/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-vip\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/l
ib/containers/storage/overlay/cc7fd9c1671034c7ec28c804e89098f3430de08294ed80b7199c664a5f72ba8e/merged","io.kubernetes.cri-o.Name":"k8s_kube-vip_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/d6df5b205fc00249d0e9590a985ea3a627fb8001b0cb30fb23590ca88bed9d95/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"d6df5b205fc00249d0e9590a985ea3a627fb8001b0cb30fb23590ca88bed9d95","io.kubernetes.cri-o.SandboxName":"k8s_kube-vip-ha-984158_kube-system_17c8e4bb866faa0106347d8b7bccd341_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17c8e4bb866faa0106347d8b7bccd341/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubele
t/pods/17c8e4bb866faa0106347d8b7bccd341/containers/kube-vip/0984bd68\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/admin.conf\",\"host_path\":\"/etc/kubernetes/admin.conf\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-vip-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.hash":"17c8e4bb866faa0106347d8b7bccd341","kubernetes.io/config.seen":"2025-09-19T22:46:18.961795051Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce","pid":836,"status":"running","bundle":"/r
un/containers/storage/overlay-containers/59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce/userdata","rootfs":"/var/lib/containers/storage/overlay/b7769c0dc387db2817a2192fae2ca0b5ab06b67506fd54e81eac8724cada8d35/merged","created":"2025-09-19T22:46:19.553698427Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"85eae708","io.kubernetes.container.name":"kube-scheduler","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10259,\"containerPort\":10259,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"85eae708\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10259,\\\"containerPort\\\":10259,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.te
rminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.485443854Z","io.kubernetes.cri-o.Image":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-scheduler:v1.34.0","io.kubernetes.cri-o.ImageRef":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-scheduler\",\"io.kubernetes.pod.name\":\"kube-scheduler-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"17a21a02ffe1f8dd7b43dae71452cdad\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-scheduler-ha-984158_17a21a02ffe1f8dd7b43dae71452cdad/kube-scheduler/
2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-scheduler\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/b7769c0dc387db2817a2192fae2ca0b5ab06b67506fd54e81eac8724cada8d35/merged","io.kubernetes.cri-o.Name":"k8s_kube-scheduler_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/b17e0b9c519a3a36026153f88111e79a608ff665648c4474defb58b5cfaf6d8b/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"b17e0b9c519a3a36026153f88111e79a608ff665648c4474defb58b5cfaf6d8b","io.kubernetes.cri-o.SandboxName":"k8s_kube-scheduler-ha-984158_kube-system_17a21a02ffe1f8dd7b43dae71452cdad_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/etc-hosts\
",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/17a21a02ffe1f8dd7b43dae71452cdad/containers/kube-scheduler/0619f79b\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/scheduler.conf\",\"host_path\":\"/etc/kubernetes/scheduler.conf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-scheduler-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.hash":"17a21a02ffe1f8dd7b43dae71452cdad","kubernetes.io/config.seen":"2025-09-19T22:46:18.961806595Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000
000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2","pid":822,"status":"running","bundle":"/run/containers/storage/overlay-containers/8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2/userdata","rootfs":"/var/lib/containers/storage/overlay/c5a33b5e3ac31267e2463538a0ad9e67be17ebbfb905e94c0e9d15a43a37fdfe/merged","created":"2025-09-19T22:46:19.545704104Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"e9e20c65","io.kubernetes.container.name":"etcd","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":2381,\"containerPort\":2381,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"e9e20c65\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPo
rt\\\":2381,\\\"containerPort\\\":2381,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.449271076Z","io.kubernetes.cri-o.Image":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.ImageName":"registry.k8s.io/etcd:3.6.4-0","io.kubernetes.cri-o.ImageRef":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"etcd\",\"io.kubernetes.pod.name\":\"etcd-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"b69a60c29223dc4628f1e45acc16ccdb\"}","io.kubernete
s.cri-o.LogPath":"/var/log/pods/kube-system_etcd-ha-984158_b69a60c29223dc4628f1e45acc16ccdb/etcd/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"etcd\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c5a33b5e3ac31267e2463538a0ad9e67be17ebbfb905e94c0e9d15a43a37fdfe/merged","io.kubernetes.cri-o.Name":"k8s_etcd_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/aca77eab195341f9bfeee850a0984b8ce26c195495117003bb595b13a5af2680/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"aca77eab195341f9bfeee850a0984b8ce26c195495117003bb595b13a5af2680","io.kubernetes.cri-o.SandboxName":"k8s_etcd-ha-984158_kube-system_b69a60c29223dc4628f1e45acc16ccdb_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib
/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/b69a60c29223dc4628f1e45acc16ccdb/containers/etcd/34e2f8ea\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/etcd\",\"host_path\":\"/var/lib/minikube/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs/etcd\",\"host_path\":\"/var/lib/minikube/certs/etcd\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"etcd-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"b69a60c29223dc4628f1e45acc16ccdb","kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.49.2:2379","kubernetes.io/config.hash":"b69a60c29223dc4628f1e45acc16ccdb","kubernetes.io/config.seen":"2025-09-19T2
2:46:18.961800811Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461","pid":840,"status":"running","bundle":"/run/containers/storage/overlay-containers/965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461/userdata","rootfs":"/var/lib/containers/storage/overlay/f113fa372b328daab74d27019751ddfd1ddb9a1d158a4e75b95a7c52c405c6c4/merged","created":"2025-09-19T22:46:19.554821732Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"d671eaa0","io.kubernetes.container.name":"kube-apiserver","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":8443,\"containerPort\":8443,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.co
ntainer.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.container.hash\":\"d671eaa0\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":8443,\\\"containerPort\\\":8443,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.489012262Z","io.kubernetes.cri-o.Image":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-apiserver:v1.34.0","io.kubernetes.cri-o.ImageRef":"90550c43ad2bcfd11fcd5fd27d
2eac5a7ca823be1308884b33dd816ec169be90","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-apiserver\",\"io.kubernetes.pod.name\":\"kube-apiserver-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"a8e2ca3a88a914207b16de44248445e2\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-apiserver-ha-984158_a8e2ca3a88a914207b16de44248445e2/kube-apiserver/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-apiserver\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/f113fa372b328daab74d27019751ddfd1ddb9a1d158a4e75b95a7c52c405c6c4/merged","io.kubernetes.cri-o.Name":"k8s_kube-apiserver_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/7a2352c9ba15d31b2b729265d3a26885bb44ca45f6d9f3b7e775f939fb89cc25/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"7a2352c9ba15d31b2b729265d3a26885bb44ca45f6d9f3b7e775f939fb
89cc25","io.kubernetes.cri-o.SandboxName":"k8s_kube-apiserver-ha-984158_kube-system_a8e2ca3a88a914207b16de44248445e2_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/containers/kube-apiserver/625662ae\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/a8e2ca3a88a914207b16de44248445e2/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_
path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-apiserver-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"a8e2ca3a88a914207b16de44248445e2","kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.49.2:8443","kubernetes.io/config.hash":"a8e2ca3a88a914207b16de44248445e2","kubernetes.io/config.seen":"2025-09-19T22:46:18.961803043Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.propert
y.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"},{"ociVersion":"1.0.2-dev","id":"e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6","pid":852,"status":"running","bundle":"/run/containers/storage/overlay-containers/e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6/userdata","rootfs":"/var/lib/containers/storage/overlay/c264b914bf6139ef613a0cc00f27820455e1cb24f62d2c566377ad12d2382849/merged","created":"2025-09-19T22:46:19.556712502Z","annotations":{"io.container.manager":"cri-o","io.kubernetes.container.hash":"7eaa1830","io.kubernetes.container.name":"kube-controller-manager","io.kubernetes.container.ports":"[{\"name\":\"probe-port\",\"hostPort\":10257,\"containerPort\":10257,\"protocol\":\"TCP\"}]","io.kubernetes.container.restartCount":"2","io.kubernetes.container.terminationMessagePath":"/dev/termination-log","io.kubernetes.container.terminationMessagePolicy":"File","io.kubernetes.cri-o.Annotations":"{\"io.kubernetes.contain
er.hash\":\"7eaa1830\",\"io.kubernetes.container.ports\":\"[{\\\"name\\\":\\\"probe-port\\\",\\\"hostPort\\\":10257,\\\"containerPort\\\":10257,\\\"protocol\\\":\\\"TCP\\\"}]\",\"io.kubernetes.container.restartCount\":\"2\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\"}","io.kubernetes.cri-o.ContainerID":"e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6","io.kubernetes.cri-o.ContainerType":"container","io.kubernetes.cri-o.Created":"2025-09-19T22:46:19.474613868Z","io.kubernetes.cri-o.Image":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.ImageName":"registry.k8s.io/kube-controller-manager:v1.34.0","io.kubernetes.cri-o.ImageRef":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","io.kubernetes.cri-o.Labels":"{\"io.kubernetes.container.name\":\"kube-controller-manager\",\"io.kubernetes.pod.name\":\"
kube-controller-manager-ha-984158\",\"io.kubernetes.pod.namespace\":\"kube-system\",\"io.kubernetes.pod.uid\":\"560e6b05a580a11369967b27d393af16\"}","io.kubernetes.cri-o.LogPath":"/var/log/pods/kube-system_kube-controller-manager-ha-984158_560e6b05a580a11369967b27d393af16/kube-controller-manager/2.log","io.kubernetes.cri-o.Metadata":"{\"name\":\"kube-controller-manager\",\"attempt\":2}","io.kubernetes.cri-o.MountPoint":"/var/lib/containers/storage/overlay/c264b914bf6139ef613a0cc00f27820455e1cb24f62d2c566377ad12d2382849/merged","io.kubernetes.cri-o.Name":"k8s_kube-controller-manager_kube-controller-manager-ha-984158_kube-system_560e6b05a580a11369967b27d393af16_2","io.kubernetes.cri-o.ResolvPath":"/run/containers/storage/overlay-containers/de818e09e70e234296b86f4c43c58dcd49c79f8617daea40d4324baf6ff48cc9/userdata/resolv.conf","io.kubernetes.cri-o.SandboxID":"de818e09e70e234296b86f4c43c58dcd49c79f8617daea40d4324baf6ff48cc9","io.kubernetes.cri-o.SandboxName":"k8s_kube-controller-manager-ha-984158_kube-system_560e6
b05a580a11369967b27d393af16_0","io.kubernetes.cri-o.SeccompProfilePath":"","io.kubernetes.cri-o.Stdin":"false","io.kubernetes.cri-o.StdinOnce":"false","io.kubernetes.cri-o.TTY":"false","io.kubernetes.cri-o.Volumes":"[{\"container_path\":\"/etc/ca-certificates\",\"host_path\":\"/etc/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/dev/termination-log\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/containers/kube-controller-manager/4ae132f8\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/hosts\",\"host_path\":\"/var/lib/kubelet/pods/560e6b05a580a11369967b27d393af16/etc-hosts\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/ssl/certs\",\"host_path\":\"/etc/ssl/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/etc/kubernetes/controller-manager.conf\",\"host_path\":\"/etc/kubernetes/controller-manager.co
nf\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/share/ca-certificates\",\"host_path\":\"/usr/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/var/lib/minikube/certs\",\"host_path\":\"/var/lib/minikube/certs\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/local/share/ca-certificates\",\"host_path\":\"/usr/local/share/ca-certificates\",\"readonly\":true,\"propagation\":0,\"selinux_relabel\":false},{\"container_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"host_path\":\"/usr/libexec/kubernetes/kubelet-plugins/volume/exec\",\"readonly\":false,\"propagation\":0,\"selinux_relabel\":false}]","io.kubernetes.pod.name":"kube-controller-manager-ha-984158","io.kubernetes.pod.namespace":"kube-system","io.kubernetes.pod.terminationGracePeriod":"30","io.kubernetes.pod.uid":"560e6b05a580a11369967b27d393af16","kubernetes.io/config.hash":"560e6b05a580a1136996
7b27d393af16","kubernetes.io/config.seen":"2025-09-19T22:46:18.961804962Z","kubernetes.io/config.source":"file","org.systemd.property.After":"['crio.service']","org.systemd.property.CollectMode":"'inactive-or-failed'","org.systemd.property.DefaultDependencies":"true","org.systemd.property.TimeoutStopUSec":"uint64 30000000"},"owner":"root"}]
	I0919 22:46:20.153032  108877 cri.go:126] list returned 5 containers
	I0919 22:46:20.153055  108877 cri.go:129] container: {ID:28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0 Status:running}
	I0919 22:46:20.153074  108877 cri.go:135] skipping {28f33c04301b217bbdfaf65d4f01fad62c0a184e239a2db9bbfb2d6c1673e1b0 running}: state = "running", want "paused"
	I0919 22:46:20.153090  108877 cri.go:129] container: {ID:59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce Status:running}
	I0919 22:46:20.153097  108877 cri.go:135] skipping {59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce running}: state = "running", want "paused"
	I0919 22:46:20.153120  108877 cri.go:129] container: {ID:8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2 Status:running}
	I0919 22:46:20.153126  108877 cri.go:135] skipping {8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2 running}: state = "running", want "paused"
	I0919 22:46:20.153136  108877 cri.go:129] container: {ID:965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461 Status:running}
	I0919 22:46:20.153144  108877 cri.go:135] skipping {965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461 running}: state = "running", want "paused"
	I0919 22:46:20.153152  108877 cri.go:129] container: {ID:e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6 Status:running}
	I0919 22:46:20.153159  108877 cri.go:135] skipping {e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6 running}: state = "running", want "paused"
	I0919 22:46:20.153217  108877 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 22:46:20.163798  108877 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 22:46:20.163821  108877 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 22:46:20.163868  108877 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 22:46:20.173357  108877 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:20.173815  108877 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-984158" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:20.173926  108877 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "ha-984158" cluster setting kubeconfig missing "ha-984158" context setting]
	I0919 22:46:20.174266  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.174929  108877 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0919 22:46:20.175466  108877 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0919 22:46:20.175485  108877 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I0919 22:46:20.175491  108877 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0919 22:46:20.175496  108877 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0919 22:46:20.175508  108877 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0919 22:46:20.175532  108877 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I0919 22:46:20.175951  108877 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 22:46:20.185275  108877 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.49.2
	I0919 22:46:20.185304  108877 kubeadm.go:593] duration metric: took 21.472405ms to restartPrimaryControlPlane
	I0919 22:46:20.185315  108877 kubeadm.go:394] duration metric: took 100.433015ms to StartCluster
	I0919 22:46:20.185333  108877 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.185409  108877 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:46:20.186087  108877 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:20.186338  108877 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:46:20.186370  108877 start.go:241] waiting for startup goroutines ...
	I0919 22:46:20.186378  108877 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 22:46:20.186635  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:20.189653  108877 out.go:179] * Enabled addons: 
	I0919 22:46:20.191192  108877 addons.go:514] duration metric: took 4.807431ms for enable addons: enabled=[]
	I0919 22:46:20.191234  108877 start.go:246] waiting for cluster config update ...
	I0919 22:46:20.191247  108877 start.go:255] writing updated cluster config ...
	I0919 22:46:20.193246  108877 out.go:203] 
	I0919 22:46:20.195195  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:20.195308  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.197094  108877 out.go:179] * Starting "ha-984158-m02" control-plane node in "ha-984158" cluster
	I0919 22:46:20.198374  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:20.199729  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:20.200930  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:20.200958  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:20.200957  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:20.201052  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:20.201070  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:20.201207  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.225517  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:20.225538  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:20.225559  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:20.225589  108877 start.go:360] acquireMachinesLock for ha-984158-m02: {Name:mk33ccd18791cf0a87d18f7af68677fa10224c04 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:20.225650  108877 start.go:364] duration metric: took 41.873µs to acquireMachinesLock for "ha-984158-m02"
	I0919 22:46:20.225673  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:20.225679  108877 fix.go:54] fixHost starting: m02
	I0919 22:46:20.225965  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:46:20.246500  108877 fix.go:112] recreateIfNeeded on ha-984158-m02: state=Stopped err=<nil>
	W0919 22:46:20.246530  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:20.248555  108877 out.go:252] * Restarting existing docker container for "ha-984158-m02" ...
	I0919 22:46:20.248640  108877 cli_runner.go:164] Run: docker start ha-984158-m02
	I0919 22:46:20.515186  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:46:20.536840  108877 kic.go:430] container "ha-984158-m02" state is running.
	I0919 22:46:20.537225  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:20.557968  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:20.558248  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:20.558317  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:20.577500  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:20.577734  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:20.577750  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:20.578405  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:37082->127.0.0.1:32843: read: connection reset by peer
	I0919 22:46:23.726391  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:46:23.726419  108877 ubuntu.go:182] provisioning hostname "ha-984158-m02"
	I0919 22:46:23.726483  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:23.757624  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:23.757898  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:23.757918  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m02 && echo "ha-984158-m02" | sudo tee /etc/hostname
	I0919 22:46:23.968819  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-984158-m02
	
	I0919 22:46:23.968912  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.000480  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:24.000783  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:24.000820  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:46:24.160931  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:46:24.160963  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:46:24.160983  108877 ubuntu.go:190] setting up certificates
	I0919 22:46:24.160993  108877 provision.go:84] configureAuth start
	I0919 22:46:24.161046  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:24.183569  108877 provision.go:143] copyHostCerts
	I0919 22:46:24.183623  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:24.183664  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:46:24.183673  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:46:24.183765  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:46:24.183860  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:24.183887  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:46:24.183893  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:46:24.183935  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:46:24.184016  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:24.184042  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:46:24.184052  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:46:24.184119  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:46:24.184203  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m02 san=[127.0.0.1 192.168.49.3 ha-984158-m02 localhost minikube]
	I0919 22:46:24.480167  108877 provision.go:177] copyRemoteCerts
	I0919 22:46:24.480231  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:46:24.480275  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.498555  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:24.595854  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0919 22:46:24.595918  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 22:46:24.622515  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0919 22:46:24.622579  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0919 22:46:24.649573  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0919 22:46:24.649635  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 22:46:24.676266  108877 provision.go:87] duration metric: took 515.262319ms to configureAuth
	I0919 22:46:24.676306  108877 ubuntu.go:206] setting minikube options for container-runtime
	I0919 22:46:24.676727  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:24.676896  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:24.696841  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:24.697083  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32843 <nil> <nil>}
	I0919 22:46:24.697124  108877 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 22:46:25.086955  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 22:46:25.086982  108877 machine.go:96] duration metric: took 4.528716196s to provisionDockerMachine
	I0919 22:46:25.086996  108877 start.go:293] postStartSetup for "ha-984158-m02" (driver="docker")
	I0919 22:46:25.087011  108877 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 22:46:25.087070  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 22:46:25.087137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.112242  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.223680  108877 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 22:46:25.229296  108877 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 22:46:25.229349  108877 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 22:46:25.229360  108877 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 22:46:25.229368  108877 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 22:46:25.229381  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 22:46:25.229444  108877 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 22:46:25.229556  108877 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 22:46:25.229575  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /etc/ssl/certs/181752.pem
	I0919 22:46:25.229692  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 22:46:25.242078  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:25.278732  108877 start.go:296] duration metric: took 191.719996ms for postStartSetup
	I0919 22:46:25.278817  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:46:25.278874  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.304273  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.421654  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:46:25.445209  108877 fix.go:56] duration metric: took 5.219521661s for fixHost
	I0919 22:46:25.445242  108877 start.go:83] releasing machines lock for "ha-984158-m02", held for 5.219578683s
	I0919 22:46:25.445316  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m02
	I0919 22:46:25.479010  108877 out.go:179] * Found network options:
	I0919 22:46:25.480818  108877 out.go:179]   - NO_PROXY=192.168.49.2
	W0919 22:46:25.482353  108877 proxy.go:120] fail to check proxy env: Error ip not in block
	W0919 22:46:25.482414  108877 proxy.go:120] fail to check proxy env: Error ip not in block
	I0919 22:46:25.482511  108877 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 22:46:25.482570  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.482798  108877 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 22:46:25.482835  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m02
	I0919 22:46:25.512551  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.514422  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32843 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m02/id_rsa Username:docker}
	I0919 22:46:25.765126  108877 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 22:46:25.771133  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:25.782542  108877 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 22:46:25.782668  108877 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 22:46:25.793362  108877 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 22:46:25.793384  108877 start.go:495] detecting cgroup driver to use...
	I0919 22:46:25.793413  108877 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 22:46:25.793446  108877 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 22:46:25.806649  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 22:46:25.820020  108877 docker.go:218] disabling cri-docker service (if available) ...
	I0919 22:46:25.820150  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 22:46:25.834424  108877 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 22:46:25.846716  108877 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 22:46:25.979915  108877 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 22:46:26.160733  108877 docker.go:234] disabling docker service ...
	I0919 22:46:26.160800  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 22:46:26.180405  108877 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 22:46:26.193545  108877 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 22:46:26.323966  108877 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 22:46:26.454055  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 22:46:26.471608  108877 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 22:46:26.491683  108877 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 22:46:26.491759  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.503650  108877 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 22:46:26.503736  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.515882  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.528470  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.540665  108877 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 22:46:26.551186  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.562785  108877 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.576061  108877 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 22:46:26.588723  108877 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 22:46:26.598603  108877 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 22:46:26.608664  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:26.737407  108877 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 22:46:27.001831  108877 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 22:46:27.001907  108877 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 22:46:27.006455  108877 start.go:563] Will wait 60s for crictl version
	I0919 22:46:27.006516  108877 ssh_runner.go:195] Run: which crictl
	I0919 22:46:27.010137  108877 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 22:46:27.048954  108877 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 22:46:27.049041  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:27.089149  108877 ssh_runner.go:195] Run: crio --version
	I0919 22:46:27.131001  108877 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 22:46:27.133021  108877 out.go:179]   - env NO_PROXY=192.168.49.2
	I0919 22:46:27.135238  108877 cli_runner.go:164] Run: docker network inspect ha-984158 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 22:46:27.153890  108877 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0919 22:46:27.158228  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:27.171318  108877 mustload.go:65] Loading cluster: ha-984158
	I0919 22:46:27.171533  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:27.171738  108877 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:27.193596  108877 host.go:66] Checking if "ha-984158" exists ...
	I0919 22:46:27.193834  108877 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158 for IP: 192.168.49.3
	I0919 22:46:27.193846  108877 certs.go:194] generating shared ca certs ...
	I0919 22:46:27.193859  108877 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:46:27.193962  108877 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 22:46:27.194001  108877 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 22:46:27.194010  108877 certs.go:256] generating profile certs ...
	I0919 22:46:27.194079  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key
	I0919 22:46:27.194165  108877 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key.73e1c648
	I0919 22:46:27.194224  108877 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key
	I0919 22:46:27.194238  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0919 22:46:27.194253  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0919 22:46:27.194265  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0919 22:46:27.194278  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0919 22:46:27.194297  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0919 22:46:27.194310  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0919 22:46:27.194323  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0919 22:46:27.194339  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0919 22:46:27.194411  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 22:46:27.194441  108877 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 22:46:27.194450  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 22:46:27.194471  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 22:46:27.194492  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 22:46:27.194516  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 22:46:27.194565  108877 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 22:46:27.194590  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:27.194602  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem -> /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.194615  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> /usr/share/ca-certificates/181752.pem
	I0919 22:46:27.194657  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158
	I0919 22:46:27.213026  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32838 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158/id_rsa Username:docker}
	I0919 22:46:27.304420  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0919 22:46:27.312939  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0919 22:46:27.335604  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0919 22:46:27.340863  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0919 22:46:27.358586  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0919 22:46:27.362601  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0919 22:46:27.378555  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0919 22:46:27.383438  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0919 22:46:27.400539  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0919 22:46:27.404743  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0919 22:46:27.422198  108877 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0919 22:46:27.427849  108877 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0919 22:46:27.444001  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 22:46:27.474378  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 22:46:27.505532  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 22:46:27.533300  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 22:46:27.561118  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0919 22:46:27.590142  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 22:46:27.618324  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 22:46:27.647550  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 22:46:27.676298  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 22:46:27.707393  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 22:46:27.746925  108877 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 22:46:27.783270  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0919 22:46:27.808955  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0919 22:46:27.835057  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0919 22:46:27.859054  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0919 22:46:27.883780  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0919 22:46:27.905848  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0919 22:46:27.929554  108877 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0919 22:46:27.949528  108877 ssh_runner.go:195] Run: openssl version
	I0919 22:46:27.955293  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 22:46:27.966171  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.970845  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.970917  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 22:46:27.978879  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 22:46:27.988983  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 22:46:27.999569  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.004058  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.004197  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 22:46:28.011324  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 22:46:28.022191  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 22:46:28.033554  108877 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.037405  108877 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.037468  108877 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 22:46:28.044623  108877 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 22:46:28.054934  108877 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 22:46:28.059006  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 22:46:28.066671  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 22:46:28.074789  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 22:46:28.083169  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 22:46:28.090396  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 22:46:28.097472  108877 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 22:46:28.104903  108877 kubeadm.go:926] updating node {m02 192.168.49.3 8443 v1.34.0 crio true true} ...
	I0919 22:46:28.105012  108877 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=ha-984158-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.3
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:ha-984158 Namespace:default APIServerHAVIP:192.168.49.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 22:46:28.105038  108877 kube-vip.go:115] generating kube-vip config ...
	I0919 22:46:28.105077  108877 ssh_runner.go:195] Run: sudo sh -c "lsmod | grep ip_vs"
	I0919 22:46:28.118386  108877 kube-vip.go:163] giving up enabling control-plane load-balancing as ipvs kernel modules appears not to be available: sudo sh -c "lsmod | grep ip_vs": Process exited with status 1
	stdout:
	
	stderr:
	I0919 22:46:28.118444  108877 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.49.254
	    - name: prometheus_server
	      value: :2112
	    image: ghcr.io/kube-vip/kube-vip:v1.0.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0919 22:46:28.118499  108877 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 22:46:28.128992  108877 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 22:46:28.129066  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0919 22:46:28.138683  108877 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0919 22:46:28.157570  108877 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 22:46:28.179790  108877 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1358 bytes)
	I0919 22:46:28.199021  108877 ssh_runner.go:195] Run: grep 192.168.49.254	control-plane.minikube.internal$ /etc/hosts
	I0919 22:46:28.203009  108877 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 22:46:28.215658  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:28.329589  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:28.341890  108877 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.168.49.3 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 22:46:28.342184  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:28.345551  108877 out.go:179] * Verifying Kubernetes components...
	I0919 22:46:28.347145  108877 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 22:46:28.465160  108877 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 22:46:28.480699  108877 kapi.go:59] client config for ha-984158: &rest.Config{Host:"https://192.168.49.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.crt", KeyFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/client.key", CAFile:"/home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x27f4a00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0919 22:46:28.480762  108877 kubeadm.go:483] Overriding stale ClientConfig host https://192.168.49.254:8443 with https://192.168.49.2:8443
	I0919 22:46:28.480949  108877 node_ready.go:35] waiting up to 6m0s for node "ha-984158-m02" to be "Ready" ...
	I0919 22:46:28.489433  108877 node_ready.go:49] node "ha-984158-m02" is "Ready"
	I0919 22:46:28.489464  108877 node_ready.go:38] duration metric: took 8.500754ms for node "ha-984158-m02" to be "Ready" ...
	I0919 22:46:28.489478  108877 api_server.go:52] waiting for apiserver process to appear ...
	I0919 22:46:28.489524  108877 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 22:46:28.502734  108877 api_server.go:72] duration metric: took 160.79998ms to wait for apiserver process to appear ...
	I0919 22:46:28.502770  108877 api_server.go:88] waiting for apiserver healthz status ...
	I0919 22:46:28.502793  108877 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0919 22:46:28.508523  108877 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0919 22:46:28.509513  108877 api_server.go:141] control plane version: v1.34.0
	I0919 22:46:28.509537  108877 api_server.go:131] duration metric: took 6.759754ms to wait for apiserver health ...
	I0919 22:46:28.509545  108877 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 22:46:28.520364  108877 system_pods.go:59] 24 kube-system pods found
	I0919 22:46:28.520531  108877 system_pods.go:61] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.520549  108877 system_pods.go:61] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.520562  108877 system_pods.go:61] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.520572  108877 system_pods.go:61] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.520578  108877 system_pods.go:61] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:46:28.520583  108877 system_pods.go:61] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:46:28.520651  108877 system_pods.go:61] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:46:28.520667  108877 system_pods.go:61] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:46:28.520700  108877 system_pods.go:61] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.520727  108877 system_pods.go:61] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.520733  108877 system_pods.go:61] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:46:28.520743  108877 system_pods.go:61] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.520750  108877 system_pods.go:61] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.520756  108877 system_pods.go:61] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:46:28.520761  108877 system_pods.go:61] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:46:28.520790  108877 system_pods.go:61] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:46:28.520804  108877 system_pods.go:61] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:46:28.520821  108877 system_pods.go:61] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.520838  108877 system_pods.go:61] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.520885  108877 system_pods.go:61] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:46:28.520900  108877 system_pods.go:61] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:46:28.520907  108877 system_pods.go:61] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:46:28.520913  108877 system_pods.go:61] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:46:28.520918  108877 system_pods.go:61] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:46:28.520964  108877 system_pods.go:74] duration metric: took 11.374418ms to wait for pod list to return data ...
	I0919 22:46:28.520985  108877 default_sa.go:34] waiting for default service account to be created ...
	I0919 22:46:28.533346  108877 default_sa.go:45] found service account: "default"
	I0919 22:46:28.533374  108877 default_sa.go:55] duration metric: took 12.372821ms for default service account to be created ...
	I0919 22:46:28.533386  108877 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 22:46:28.540037  108877 system_pods.go:86] 24 kube-system pods found
	I0919 22:46:28.540077  108877 system_pods.go:89] "coredns-66bc5c9577-5gnbx" [4a9e64cd-95e6-4964-a16f-3237de3d28fa] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.540086  108877 system_pods.go:89] "coredns-66bc5c9577-ltjmz" [8b356781-e779-480c-ab92-e7f13bc4317d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 22:46:28.540093  108877 system_pods.go:89] "etcd-ha-984158" [76d820c7-ccf8-41cd-8c90-c5749ba503cc] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.540130  108877 system_pods.go:89] "etcd-ha-984158-m02" [16cfe958-8c21-44a4-8ad7-f4940ed19e86] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 22:46:28.540138  108877 system_pods.go:89] "etcd-ha-984158-m03" [f9f54b34-f295-4ce7-b995-55792c2edbc4] Running
	I0919 22:46:28.540143  108877 system_pods.go:89] "kindnet-269nt" [9296dd27-c9a7-4e97-af20-61639b4a7d73] Running
	I0919 22:46:28.540148  108877 system_pods.go:89] "kindnet-rd882" [ec2a73f4-ff21-420f-b5de-5a9cd6c601c9] Running
	I0919 22:46:28.540153  108877 system_pods.go:89] "kindnet-th979" [c1edb1ac-ed67-4bf4-b6b6-a59e86acfd0b] Running
	I0919 22:46:28.540160  108877 system_pods.go:89] "kube-apiserver-ha-984158" [a526d525-c49d-4ed9-a5bc-1a2c80c52528] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.540167  108877 system_pods.go:89] "kube-apiserver-ha-984158-m02" [4f4fcdf5-4d7b-4240-aaa3-04fa67bdb51a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 22:46:28.540171  108877 system_pods.go:89] "kube-apiserver-ha-984158-m03" [647b551a-5187-415e-8e6c-9b4196ade9cc] Running
	I0919 22:46:28.540177  108877 system_pods.go:89] "kube-controller-manager-ha-984158" [6320120b-bc72-4902-b39f-e5ec46c0cd8b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.540186  108877 system_pods.go:89] "kube-controller-manager-ha-984158-m02" [bffa9750-9c6e-4e4b-bb0e-b757a9789f99] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 22:46:28.540190  108877 system_pods.go:89] "kube-controller-manager-ha-984158-m03" [b8e9c896-7a8a-4040-b0b9-870c657ba1fc] Running
	I0919 22:46:28.540197  108877 system_pods.go:89] "kube-proxy-hdxxn" [49635dd7-bede-4f86-b284-ceeda6ce55a8] Running
	I0919 22:46:28.540201  108877 system_pods.go:89] "kube-proxy-k2drm" [040bf3f7-8d97-4799-b3a2-12b57eec38ef] Running
	I0919 22:46:28.540206  108877 system_pods.go:89] "kube-proxy-plrn2" [3191b46b-398a-43c3-94f2-09797f6f8a50] Running
	I0919 22:46:28.540211  108877 system_pods.go:89] "kube-scheduler-ha-984158" [957e34f5-834c-4f42-a840-a9d1054ca69f] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.540216  108877 system_pods.go:89] "kube-scheduler-ha-984158-m02" [5d8bd38d-c26f-407a-a91b-8786a0218671] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 22:46:28.540224  108877 system_pods.go:89] "kube-scheduler-ha-984158-m03" [c6a2aead-8896-40c4-9134-4b52125b9b9c] Running
	I0919 22:46:28.540228  108877 system_pods.go:89] "kube-vip-ha-984158" [e76d18e9-f1e9-44ce-b006-26f3c21b3b1b] Running
	I0919 22:46:28.540231  108877 system_pods.go:89] "kube-vip-ha-984158-m02" [88a0bd5d-c27f-4c68-ac0d-ea685622a975] Running
	I0919 22:46:28.540234  108877 system_pods.go:89] "kube-vip-ha-984158-m03" [edbc937a-6f7e-42da-a90d-16e725f620c6] Running
	I0919 22:46:28.540237  108877 system_pods.go:89] "storage-provisioner" [383be09b-7235-4b37-9e7b-be2a8a866e4a] Running
	I0919 22:46:28.540244  108877 system_pods.go:126] duration metric: took 6.851735ms to wait for k8s-apps to be running ...
	I0919 22:46:28.540253  108877 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 22:46:28.540297  108877 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 22:46:28.553240  108877 system_svc.go:56] duration metric: took 12.975587ms WaitForService to wait for kubelet
	I0919 22:46:28.553269  108877 kubeadm.go:578] duration metric: took 211.340401ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 22:46:28.553284  108877 node_conditions.go:102] verifying NodePressure condition ...
	I0919 22:46:28.556598  108877 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:46:28.556630  108877 node_conditions.go:123] node cpu capacity is 8
	I0919 22:46:28.556644  108877 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 22:46:28.556649  108877 node_conditions.go:123] node cpu capacity is 8
	I0919 22:46:28.556655  108877 node_conditions.go:105] duration metric: took 3.365055ms to run NodePressure ...
	I0919 22:46:28.556668  108877 start.go:241] waiting for startup goroutines ...
	I0919 22:46:28.556700  108877 start.go:255] writing updated cluster config ...
	I0919 22:46:28.559365  108877 out.go:203] 
	I0919 22:46:28.561049  108877 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:28.561185  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.563177  108877 out.go:179] * Starting "ha-984158-m04" worker node in "ha-984158" cluster
	I0919 22:46:28.565659  108877 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:46:28.567464  108877 out.go:179] * Pulling base image v0.0.48 ...
	I0919 22:46:28.569620  108877 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 22:46:28.569658  108877 cache.go:58] Caching tarball of preloaded images
	I0919 22:46:28.569731  108877 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:46:28.569801  108877 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 22:46:28.569818  108877 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 22:46:28.570024  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.591350  108877 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 22:46:28.591370  108877 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 22:46:28.591387  108877 cache.go:232] Successfully downloaded all kic artifacts
	I0919 22:46:28.591426  108877 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:46:28.591497  108877 start.go:364] duration metric: took 50.571µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:46:28.591521  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:46:28.591532  108877 fix.go:54] fixHost starting: m04
	I0919 22:46:28.591813  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:46:28.611528  108877 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Stopped err=<nil>
	W0919 22:46:28.611563  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:46:28.614509  108877 out.go:252] * Restarting existing docker container for "ha-984158-m04" ...
	I0919 22:46:28.614597  108877 cli_runner.go:164] Run: docker start ha-984158-m04
	I0919 22:46:28.891706  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:46:28.909967  108877 kic.go:430] container "ha-984158-m04" state is running.
	I0919 22:46:28.910342  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:46:28.932443  108877 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/ha-984158/config.json ...
	I0919 22:46:28.932786  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:46:28.932866  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:46:28.952373  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:46:28.952595  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:46:28.952610  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:46:28.953238  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:58822->127.0.0.1:32848: read: connection reset by peer
	I0919 22:46:31.992434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:35.030266  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:38.067360  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:41.104918  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:44.141546  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:47.180486  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:50.217377  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:53.253625  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:56.290653  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:46:59.328801  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:02.366335  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:05.404369  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:08.441218  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:11.478040  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:14.517286  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:17.555077  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:20.590775  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:23.628032  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:26.665685  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:29.702885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:32.739715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:35.776525  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:38.813501  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:41.850732  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:44.888481  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:47.925335  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:50.962781  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:54.001543  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:47:57.039219  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:00.076356  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:03.114796  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:06.153655  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:09.190434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:12.228005  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:15.266169  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:18.303890  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:21.341005  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:24.378793  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:27.415263  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:30.452522  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:33.489642  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:36.526922  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:39.565971  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:42.604085  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:45.642453  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:48.680255  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:51.718847  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:54.755973  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:48:57.794058  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:00.830885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:03.867241  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:06.905329  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:09.942692  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:12.979007  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:16.016090  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:19.052722  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:22.090878  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:25.128424  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:28.164980  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:31.165214  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:49:31.165246  108877 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 22:49:31.165343  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:49:31.186337  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:49:31.186559  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:49:31.186572  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m04 && echo "ha-984158-m04" | sudo tee /etc/hostname
	I0919 22:49:31.222041  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:34.260373  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:37.296429  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:40.333177  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:43.369818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:46.407326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:49.445653  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:52.481801  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:55.519688  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:49:58.556954  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:01.594704  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:04.631982  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:07.671074  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:10.707659  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:13.743738  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:16.780434  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:19.818728  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:22.856313  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:25.895572  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:28.933186  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:31.971330  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:35.009667  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:38.046268  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:41.085862  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:44.123308  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:47.162970  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:50.201144  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:53.237530  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:56.277091  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:50:59.315180  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:02.352338  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:05.393758  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:08.429927  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:11.467205  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:14.505353  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:17.541220  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:20.577894  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:23.615430  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:26.652560  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:29.692571  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:32.729044  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:35.767750  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:38.804231  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:41.841785  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:44.879839  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:47.915818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:50.951715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:53.987351  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:51:57.023082  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:00.061237  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:03.100535  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:06.138086  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:09.175795  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:12.212791  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:15.251952  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:18.287922  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:21.324066  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:24.361564  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:27.399277  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:30.435299  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:33.436225  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:52:33.436347  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:52:33.457501  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:52:33.457777  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:52:33.457803  108877 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-984158-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-984158-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-984158-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 22:52:33.496865  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:36.535257  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:39.572013  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:42.609779  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:45.647614  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:48.684847  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:51.721701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:54.759074  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:52:57.796080  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:00.832366  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:03.868817  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:06.905935  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:09.942978  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:12.979706  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:16.016675  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:19.056677  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:22.094715  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:25.132270  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:28.169494  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:31.206055  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:34.243569  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:37.279505  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:40.316595  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:43.353466  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:46.390229  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:49.429152  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:52.466242  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:55.505090  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:53:58.542171  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:01.579326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:04.618595  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:07.655460  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:10.694154  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:13.730639  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:16.768164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:19.806285  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:22.841871  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:25.880314  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:28.916546  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:31.954063  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:34.990475  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:38.028210  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:41.064783  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:44.103312  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:47.140700  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:50.178632  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:53.215692  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:56.252602  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:54:59.291839  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:02.328340  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:05.366220  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:08.404407  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:11.444828  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:14.482607  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:17.519054  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:20.556896  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:23.594412  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:26.631873  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:29.668984  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:32.707149  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:35.708797  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 22:55:35.708827  108877 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 22:55:35.708848  108877 ubuntu.go:190] setting up certificates
	I0919 22:55:35.708859  108877 provision.go:84] configureAuth start
	I0919 22:55:35.708915  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:35.730835  108877 provision.go:143] copyHostCerts
	I0919 22:55:35.730877  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:35.730913  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:35.730922  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:35.731023  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:35.731145  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:35.731168  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:35.731175  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:35.731212  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:35.731268  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:35.731288  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:35.731295  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:35.731320  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:35.731382  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:36.000694  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:36.000754  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:36.000792  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:36.019214  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:36.055827  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.055873  108877 retry.go:31] will retry after 182.097125ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:36.274693  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.274733  108877 retry.go:31] will retry after 386.768315ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:36.698187  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:36.698226  108877 retry.go:31] will retry after 362.057256ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:37.098814  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.098849  108877 retry.go:31] will retry after 787.271133ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:37.923015  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.923091  108877 provision.go:87] duration metric: took 2.21422803s to configureAuth
	W0919 22:55:37.923097  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.923153  108877 retry.go:31] will retry after 82.874µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:37.924303  108877 provision.go:84] configureAuth start
	I0919 22:55:37.924373  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:37.943722  108877 provision.go:143] copyHostCerts
	I0919 22:55:37.943762  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:37.943800  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:37.943812  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:37.943881  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:37.943977  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:37.944003  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:37.944013  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:37.944047  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:37.944176  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:37.944202  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:37.944212  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:37.944250  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:37.944357  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:38.121946  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:38.122004  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:38.122068  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:38.140663  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:38.177846  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.177874  108877 retry.go:31] will retry after 202.591135ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:38.418642  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.418669  108877 retry.go:31] will retry after 500.457311ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:38.956500  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:38.956544  108877 retry.go:31] will retry after 832.609802ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:39.826083  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.826197  108877 provision.go:87] duration metric: took 1.901874989s to configureAuth
	W0919 22:55:39.826209  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.826224  108877 retry.go:31] will retry after 191.755µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:39.827360  108877 provision.go:84] configureAuth start
	I0919 22:55:39.827427  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:39.845574  108877 provision.go:143] copyHostCerts
	I0919 22:55:39.845617  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:39.845646  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:39.845655  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:39.845715  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:39.845813  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:39.845833  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:39.845840  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:39.845863  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:39.845922  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:39.845939  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:39.845945  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:39.845964  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:39.846040  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:39.978299  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:39.978353  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:39.978404  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:39.996929  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:40.036821  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.036849  108877 retry.go:31] will retry after 355.315524ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:40.430448  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.430479  108877 retry.go:31] will retry after 524.043693ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:40.995748  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:40.995788  108877 retry.go:31] will retry after 825.079811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:41.857396  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.857496  108877 provision.go:87] duration metric: took 2.030120822s to configureAuth
	W0919 22:55:41.857504  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.857517  108877 retry.go:31] will retry after 196.455µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:41.858679  108877 provision.go:84] configureAuth start
	I0919 22:55:41.858761  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:41.877440  108877 provision.go:143] copyHostCerts
	I0919 22:55:41.877476  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:41.877504  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:41.877510  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:41.877569  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:41.877646  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:41.877664  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:41.877671  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:41.877692  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:41.877735  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:41.877752  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:41.877757  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:41.877775  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:41.877893  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:42.172702  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:42.172767  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:42.172802  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:42.191680  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:42.229220  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:42.229251  108877 retry.go:31] will retry after 337.452362ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:42.604511  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:42.604549  108877 retry.go:31] will retry after 484.976043ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:43.128620  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:43.128659  108877 retry.go:31] will retry after 309.196582ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:43.475021  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:43.475061  108877 retry.go:31] will retry after 537.150728ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.048722  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.048824  108877 provision.go:87] duration metric: took 2.190120686s to configureAuth
	W0919 22:55:44.048837  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.048852  108877 retry.go:31] will retry after 485.508µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.049993  108877 provision.go:84] configureAuth start
	I0919 22:55:44.050139  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:44.068794  108877 provision.go:143] copyHostCerts
	I0919 22:55:44.068840  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:44.068876  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:44.068888  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:44.068955  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:44.069097  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:44.069161  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:44.069170  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:44.069213  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:44.069302  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:44.069327  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:44.069334  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:44.069367  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:44.069465  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:44.149950  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:44.150042  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:44.150080  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:44.170311  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:44.208034  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.208067  108877 retry.go:31] will retry after 317.83838ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.562094  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.562156  108877 retry.go:31] will retry after 368.430243ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:44.966948  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:44.966999  108877 retry.go:31] will retry after 300.011867ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:45.302980  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:45.303022  108877 retry.go:31] will retry after 670.167345ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.008703  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.008777  108877 provision.go:87] duration metric: took 1.958765521s to configureAuth
	W0919 22:55:46.008786  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.008795  108877 retry.go:31] will retry after 402.409µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.009909  108877 provision.go:84] configureAuth start
	I0919 22:55:46.009981  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:46.028169  108877 provision.go:143] copyHostCerts
	I0919 22:55:46.028208  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:46.028244  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:46.028257  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:46.028319  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:46.028426  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:46.028453  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:46.028460  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:46.028494  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:46.028559  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:46.028584  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:46.028593  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:46.028622  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:46.028752  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:46.085067  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:46.085149  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:46.085194  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:46.104771  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:46.141286  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.141321  108877 retry.go:31] will retry after 207.521471ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.387207  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.387241  108877 retry.go:31] will retry after 188.974379ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:46.613516  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:46.613549  108877 retry.go:31] will retry after 623.504755ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:47.274171  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.274247  108877 retry.go:31] will retry after 293.739201ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.568796  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:47.587183  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:47.626566  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.626603  108877 retry.go:31] will retry after 297.290434ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:47.959843  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:47.959875  108877 retry.go:31] will retry after 308.614989ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:48.306199  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:48.306228  108877 retry.go:31] will retry after 332.873983ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:48.677794  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:48.677820  108877 retry.go:31] will retry after 515.194731ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:49.229678  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.229852  108877 provision.go:87] duration metric: took 3.219921943s to configureAuth
	W0919 22:55:49.229871  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.229885  108877 retry.go:31] will retry after 771.906µs: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.231039  108877 provision.go:84] configureAuth start
	I0919 22:55:49.231132  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:49.249933  108877 provision.go:143] copyHostCerts
	I0919 22:55:49.249972  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:49.250002  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:49.250011  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:49.250071  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:49.250213  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:49.250238  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:49.250245  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:49.250271  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:49.250344  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:49.250363  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:49.250378  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:49.250402  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:49.250471  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:49.448490  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:49.448554  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:49.448598  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:49.469591  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:49.505587  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.505623  108877 retry.go:31] will retry after 170.346142ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:49.713640  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:49.713675  108877 retry.go:31] will retry after 510.004107ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:50.260537  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:50.260571  108877 retry.go:31] will retry after 538.129291ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:50.835123  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:50.835210  108877 retry.go:31] will retry after 334.002809ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.169877  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:51.188990  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:51.226528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.226556  108877 retry.go:31] will retry after 188.622401ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:51.451939  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.451970  108877 retry.go:31] will retry after 246.781671ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:51.738861  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:51.738913  108877 retry.go:31] will retry after 687.433161ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:52.463132  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.463228  108877 provision.go:87] duration metric: took 3.232167601s to configureAuth
	W0919 22:55:52.463242  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.463253  108877 retry.go:31] will retry after 1.470197ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.465465  108877 provision.go:84] configureAuth start
	I0919 22:55:52.465539  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:52.484373  108877 provision.go:143] copyHostCerts
	I0919 22:55:52.484410  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:52.484436  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:52.484445  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:52.484498  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:52.484585  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:52.484603  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:52.484607  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:52.484629  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:52.484686  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:52.484704  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:52.484708  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:52.484726  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:52.484789  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:52.776772  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:52.776836  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:52.776869  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:52.794899  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:52.833693  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:52.833739  108877 retry.go:31] will retry after 239.768811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:53.110629  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:53.110665  108877 retry.go:31] will retry after 481.507936ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:53.629448  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:53.629481  108877 retry.go:31] will retry after 679.192834ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:54.344745  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.344825  108877 retry.go:31] will retry after 299.898432ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.645343  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:54.664630  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:54.700188  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.700227  108877 retry.go:31] will retry after 173.861141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:54.910656  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:54.910700  108877 retry.go:31] will retry after 446.087955ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:55.394429  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.394463  108877 retry.go:31] will retry after 492.588436ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:55.925984  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.926132  108877 provision.go:87] duration metric: took 3.46064756s to configureAuth
	W0919 22:55:55.926146  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.926157  108877 retry.go:31] will retry after 1.103973ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:55.928314  108877 provision.go:84] configureAuth start
	I0919 22:55:55.928383  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:55.946349  108877 provision.go:143] copyHostCerts
	I0919 22:55:55.946384  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:55.946414  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:55.946423  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:55.946479  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:55.946566  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:55.946587  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:55.946594  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:55.946616  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:55.946677  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:55.946695  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:55.946698  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:55.946718  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:55.946783  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:55.989895  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:55.989952  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:55.989992  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:56.010643  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:56.046843  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.046874  108877 retry.go:31] will retry after 200.709085ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:56.284529  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.284563  108877 retry.go:31] will retry after 260.402259ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:56.584328  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:56.584356  108877 retry.go:31] will retry after 403.951779ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:57.027461  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.027496  108877 retry.go:31] will retry after 769.133652ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:57.834789  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.834897  108877 provision.go:87] duration metric: took 1.906563875s to configureAuth
	W0919 22:55:57.834928  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.834952  108877 retry.go:31] will retry after 2.547029ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.838182  108877 provision.go:84] configureAuth start
	I0919 22:55:57.838251  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:55:57.857889  108877 provision.go:143] copyHostCerts
	I0919 22:55:57.857938  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:57.857978  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:55:57.857992  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:55:57.858214  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:55:57.858453  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:57.858500  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:55:57.858507  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:55:57.858547  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:55:57.858631  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:57.858652  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:55:57.858656  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:55:57.858686  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:55:57.858755  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:55:57.923859  108877 provision.go:177] copyRemoteCerts
	I0919 22:55:57.923932  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:55:57.923988  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:57.942482  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:57.978505  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:57.978531  108877 retry.go:31] will retry after 131.970521ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:58.146397  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:58.146425  108877 retry.go:31] will retry after 530.399158ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:58.712484  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:58.712511  108877 retry.go:31] will retry after 786.372545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:55:59.534836  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.534922  108877 retry.go:31] will retry after 168.385695ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.704394  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:55:59.724227  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:55:59.760581  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:55:59.760612  108877 retry.go:31] will retry after 247.132588ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:00.044197  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:00.044224  108877 retry.go:31] will retry after 336.127105ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:00.416602  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:00.416636  108877 retry.go:31] will retry after 720.277952ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:01.173095  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.173217  108877 provision.go:87] duration metric: took 3.335013579s to configureAuth
	W0919 22:56:01.173229  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.173243  108877 retry.go:31] will retry after 2.798832ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.176494  108877 provision.go:84] configureAuth start
	I0919 22:56:01.176575  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:01.195250  108877 provision.go:143] copyHostCerts
	I0919 22:56:01.195293  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:01.195331  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:01.195367  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:01.195510  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:01.195659  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:01.195689  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:01.195701  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:01.195740  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:01.195833  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:01.195857  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:01.195864  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:01.195897  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:01.195988  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:01.859275  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:01.859345  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:01.859388  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:01.879176  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:01.914943  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:01.914970  108877 retry.go:31] will retry after 258.363429ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:02.210869  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:02.210991  108877 retry.go:31] will retry after 560.664787ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:02.808203  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:02.808239  108877 retry.go:31] will retry after 561.515443ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:03.405700  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.405799  108877 retry.go:31] will retry after 263.782493ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.670387  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:03.689156  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:03.724788  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:03.724820  108877 retry.go:31] will retry after 287.070084ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:04.048180  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:04.048218  108877 retry.go:31] will retry after 207.120232ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:04.291310  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:04.291346  108877 retry.go:31] will retry after 757.196129ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:05.086835  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.086959  108877 provision.go:87] duration metric: took 3.910440733s to configureAuth
	W0919 22:56:05.086974  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.086986  108877 retry.go:31] will retry after 5.223742ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.093247  108877 provision.go:84] configureAuth start
	I0919 22:56:05.093377  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:05.113825  108877 provision.go:143] copyHostCerts
	I0919 22:56:05.113865  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:05.113909  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:05.113915  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:05.113970  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:05.114424  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:05.115054  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:05.115087  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:05.115157  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:05.115268  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:05.115294  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:05.115300  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:05.115331  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:05.115412  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:05.404989  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:05.405045  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:05.405078  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:05.422957  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:05.459168  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.459197  108877 retry.go:31] will retry after 344.462045ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:05.841287  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:05.841328  108877 retry.go:31] will retry after 542.408002ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:06.419402  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:06.419431  108877 retry.go:31] will retry after 605.017904ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:07.062463  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.062547  108877 retry.go:31] will retry after 275.860303ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.339003  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:07.356567  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:07.391748  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.391780  108877 retry.go:31] will retry after 178.699792ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:07.607876  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:07.607911  108877 retry.go:31] will retry after 375.15091ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:08.018976  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.019003  108877 retry.go:31] will retry after 784.188181ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:08.839997  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.840145  108877 provision.go:87] duration metric: took 3.746870768s to configureAuth
	W0919 22:56:08.840159  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.840169  108877 retry.go:31] will retry after 6.861054ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:08.847426  108877 provision.go:84] configureAuth start
	I0919 22:56:08.847505  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:08.865433  108877 provision.go:143] copyHostCerts
	I0919 22:56:08.865480  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:08.865518  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:08.865527  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:08.865593  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:08.865688  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:08.865715  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:08.865723  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:08.865762  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:08.865831  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:08.865859  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:08.865867  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:08.865899  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:08.865974  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:09.225606  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:09.225675  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:09.225720  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:09.245000  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:09.283542  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:09.283582  108877 retry.go:31] will retry after 143.583579ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:09.463983  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:09.464011  108877 retry.go:31] will retry after 511.26629ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:10.011156  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:10.011188  108877 retry.go:31] will retry after 376.764816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:10.424314  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:10.424349  108877 retry.go:31] will retry after 819.399589ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:11.279887  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.279970  108877 provision.go:87] duration metric: took 2.432521133s to configureAuth
	W0919 22:56:11.279984  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.279993  108877 retry.go:31] will retry after 12.318965ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.293297  108877 provision.go:84] configureAuth start
	I0919 22:56:11.293408  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:11.311440  108877 provision.go:143] copyHostCerts
	I0919 22:56:11.311481  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:11.311518  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:11.311531  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:11.311593  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:11.311690  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:11.311716  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:11.311727  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:11.311758  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:11.311821  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:11.311848  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:11.311857  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:11.311888  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:11.311956  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:11.580231  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:11.580306  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:11.580350  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:11.599414  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:11.635618  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.635650  108877 retry.go:31] will retry after 277.201613ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:11.949314  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:11.949341  108877 retry.go:31] will retry after 274.628798ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:12.261504  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:12.261533  108877 retry.go:31] will retry after 791.765374ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:13.092279  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.092350  108877 retry.go:31] will retry after 323.897677ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.416868  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:13.437301  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:13.474299  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.474337  108877 retry.go:31] will retry after 200.730433ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:13.711949  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:13.711988  108877 retry.go:31] will retry after 539.542496ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:14.289044  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.289078  108877 retry.go:31] will retry after 383.679218ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:14.710216  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.710308  108877 provision.go:87] duration metric: took 3.416985511s to configureAuth
	W0919 22:56:14.710319  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.710331  108877 retry.go:31] will retry after 19.04317ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:14.729514  108877 provision.go:84] configureAuth start
	I0919 22:56:14.729620  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:14.748043  108877 provision.go:143] copyHostCerts
	I0919 22:56:14.748082  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:14.748148  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:14.748161  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:14.748230  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:14.748328  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:14.748367  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:14.748378  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:14.748413  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:14.748479  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:14.748507  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:14.748517  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:14.748546  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:14.748617  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:15.109353  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:15.109409  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:15.109441  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:15.128026  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:15.164949  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.164987  108877 retry.go:31] will retry after 172.597249ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:15.374972  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.375000  108877 retry.go:31] will retry after 222.185257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:15.633045  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:15.633082  108877 retry.go:31] will retry after 703.284522ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:16.372656  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.372734  108877 retry.go:31] will retry after 261.771317ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.635337  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:16.654949  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:16.690945  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:16.690979  108877 retry.go:31] will retry after 300.102808ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.027866  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.027899  108877 retry.go:31] will retry after 309.831037ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.376137  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.376168  108877 retry.go:31] will retry after 468.148418ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:17.880961  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:17.880988  108877 retry.go:31] will retry after 684.79805ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:18.603567  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:18.603671  108877 provision.go:87] duration metric: took 3.874130397s to configureAuth
	W0919 22:56:18.603685  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:18.603700  108877 retry.go:31] will retry after 42.064967ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:18.645896  108877 provision.go:84] configureAuth start
	I0919 22:56:18.646008  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:18.665460  108877 provision.go:143] copyHostCerts
	I0919 22:56:18.665495  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:18.665529  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:18.665539  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:18.665594  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:18.665668  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:18.665686  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:18.665693  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:18.665713  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:18.665754  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:18.665771  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:18.665777  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:18.665797  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:18.665844  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:19.242094  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:19.242156  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:19.242191  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:19.260155  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:19.296012  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.296038  108877 retry.go:31] will retry after 245.481119ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:19.578197  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.578231  108877 retry.go:31] will retry after 268.274354ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:19.882353  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:19.882415  108877 retry.go:31] will retry after 563.481155ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:20.482263  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.482363  108877 retry.go:31] will retry after 188.022762ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.670631  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:20.690671  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:20.726599  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.726629  108877 retry.go:31] will retry after 132.052233ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:20.894470  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:20.894501  108877 retry.go:31] will retry after 333.068816ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:21.263912  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.263937  108877 retry.go:31] will retry after 616.384688ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:21.917331  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.917427  108877 provision.go:87] duration metric: took 3.271503829s to configureAuth
	W0919 22:56:21.917439  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.917451  108877 retry.go:31] will retry after 63.141944ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:21.980683  108877 provision.go:84] configureAuth start
	I0919 22:56:21.980783  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:21.997490  108877 provision.go:143] copyHostCerts
	I0919 22:56:21.997546  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:21.997591  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:21.997601  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:21.997674  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:21.997779  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:21.997809  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:21.997816  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:21.997849  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:21.997918  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:21.997947  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:21.997956  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:21.997986  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:21.998059  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:22.147518  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:22.147575  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:22.147622  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:22.166129  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:22.203176  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:22.203206  108877 retry.go:31] will retry after 355.464116ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:22.595615  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:22.595643  108877 retry.go:31] will retry after 381.375504ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:23.013375  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.013405  108877 retry.go:31] will retry after 485.129276ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:23.533999  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.534064  108877 retry.go:31] will retry after 259.478636ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.794591  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:23.813276  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:23.848854  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:23.848883  108877 retry.go:31] will retry after 136.979108ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.022487  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.022517  108877 retry.go:31] will retry after 430.182854ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.489381  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.489421  108877 retry.go:31] will retry after 440.378545ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:24.966182  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:24.966213  108877 retry.go:31] will retry after 570.593495ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:25.572888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.572980  108877 provision.go:87] duration metric: took 3.592258128s to configureAuth
	W0919 22:56:25.572991  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.573002  108877 retry.go:31] will retry after 80.275673ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:25.654286  108877 provision.go:84] configureAuth start
	I0919 22:56:25.654397  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:25.673356  108877 provision.go:143] copyHostCerts
	I0919 22:56:25.673394  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:25.673430  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:25.673441  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:25.673503  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:25.673583  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:25.673602  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:25.673609  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:25.673633  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:25.673708  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:25.673726  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:25.673732  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:25.673750  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:25.673798  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:25.978732  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:25.978789  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:25.978821  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:25.998793  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:26.035722  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.035752  108877 retry.go:31] will retry after 185.817603ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:26.258692  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.258726  108877 retry.go:31] will retry after 366.478539ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:26.662736  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:26.662770  108877 retry.go:31] will retry after 737.24048ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:27.436960  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.437068  108877 retry.go:31] will retry after 357.474232ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.794679  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:27.812988  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:27.848661  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:27.848697  108877 retry.go:31] will retry after 227.065335ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.113046  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.113086  108877 retry.go:31] will retry after 331.805613ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.482729  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.482755  108877 retry.go:31] will retry after 457.757799ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:28.977064  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.977208  108877 provision.go:87] duration metric: took 3.322888473s to configureAuth
	W0919 22:56:28.977225  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:28.977238  108877 retry.go:31] will retry after 82.927245ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.060500  108877 provision.go:84] configureAuth start
	I0919 22:56:29.060615  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:29.079194  108877 provision.go:143] copyHostCerts
	I0919 22:56:29.079237  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:29.079276  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:29.079288  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:29.079351  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:29.079454  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:29.079480  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:29.079488  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:29.079525  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:29.079599  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:29.079623  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:29.079631  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:29.079664  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:29.079736  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:29.134695  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:29.134761  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:29.134810  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:29.152678  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:29.188254  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.188282  108877 retry.go:31] will retry after 137.720284ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:29.363383  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.363416  108877 retry.go:31] will retry after 506.726285ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:29.908847  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:29.908880  108877 retry.go:31] will retry after 411.304777ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:30.355704  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.355793  108877 retry.go:31] will retry after 203.717987ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.560235  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:30.578622  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:30.616921  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:30.616952  108877 retry.go:31] will retry after 370.771171ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.025652  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.025682  108877 retry.go:31] will retry after 362.677663ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.426077  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.426132  108877 retry.go:31] will retry after 441.8947ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:31.904914  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.904994  108877 provision.go:87] duration metric: took 2.844469676s to configureAuth
	W0919 22:56:31.905001  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:31.905011  108877 retry.go:31] will retry after 102.648658ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.008362  108877 provision.go:84] configureAuth start
	I0919 22:56:32.008479  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:32.026977  108877 provision.go:143] copyHostCerts
	I0919 22:56:32.027012  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:32.027044  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:32.027054  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:32.027121  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:32.027216  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:32.027240  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:32.027244  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:32.027266  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:32.027319  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:32.027335  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:32.027339  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:32.027361  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:32.027437  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:32.395029  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:32.395089  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:32.395137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:32.413735  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:32.449599  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.449631  108877 retry.go:31] will retry after 238.059442ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:32.724337  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:32.724367  108877 retry.go:31] will retry after 445.437522ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:33.205585  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:33.205623  108877 retry.go:31] will retry after 605.339888ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:33.847039  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:33.847151  108877 retry.go:31] will retry after 217.437844ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.065727  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:34.084461  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:34.121069  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.121144  108877 retry.go:31] will retry after 191.153871ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:34.347528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.347617  108877 retry.go:31] will retry after 310.100528ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:34.694764  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:34.694791  108877 retry.go:31] will retry after 336.844738ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:35.068059  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.068095  108877 retry.go:31] will retry after 778.88836ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:35.885735  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.885829  108877 provision.go:87] duration metric: took 3.877417139s to configureAuth
	W0919 22:56:35.885839  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:35.885851  108877 retry.go:31] will retry after 310.258288ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:36.196298  108877 provision.go:84] configureAuth start
	I0919 22:56:36.196405  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:36.216801  108877 provision.go:143] copyHostCerts
	I0919 22:56:36.216840  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:36.216869  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:36.216878  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:36.216935  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:36.217042  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:36.217081  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:36.217086  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:36.217132  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:36.217198  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:36.217218  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:36.217225  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:36.217246  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:36.217299  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:36.911886  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:36.911947  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:36.911991  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:36.930148  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:36.965855  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:36.965887  108877 retry.go:31] will retry after 268.589558ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:37.271625  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:37.271657  108877 retry.go:31] will retry after 479.678948ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:37.788516  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:37.788543  108877 retry.go:31] will retry after 402.18824ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:38.227194  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.227284  108877 retry.go:31] will retry after 224.738673ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.452790  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:38.471469  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:38.507319  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.507351  108877 retry.go:31] will retry after 240.712716ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:38.784559  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:38.784596  108877 retry.go:31] will retry after 538.694984ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:39.360038  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.360067  108877 retry.go:31] will retry after 536.342982ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:39.932339  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.932422  108877 provision.go:87] duration metric: took 3.736097795s to configureAuth
	W0919 22:56:39.932430  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:39.932443  108877 retry.go:31] will retry after 206.453606ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.139916  108877 provision.go:84] configureAuth start
	I0919 22:56:40.140025  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:40.159279  108877 provision.go:143] copyHostCerts
	I0919 22:56:40.159324  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:40.159368  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:40.159381  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:40.159448  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:40.159547  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:40.159573  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:40.159581  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:40.159617  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:40.159717  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:40.159742  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:40.159750  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:40.159784  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:40.159858  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:40.276670  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:40.276739  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:40.276783  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:40.297504  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:40.334785  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.334822  108877 retry.go:31] will retry after 328.004509ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:40.701136  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:40.701168  108877 retry.go:31] will retry after 413.032497ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:41.151037  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:41.151097  108877 retry.go:31] will retry after 823.289324ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:42.010820  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.010916  108877 provision.go:87] duration metric: took 1.870966844s to configureAuth
	W0919 22:56:42.010931  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.010950  108877 retry.go:31] will retry after 488.057311ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.499593  108877 provision.go:84] configureAuth start
	I0919 22:56:42.499692  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:42.517980  108877 provision.go:143] copyHostCerts
	I0919 22:56:42.518015  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:42.518052  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:42.518058  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:42.518129  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:42.518224  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:42.518244  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:42.518249  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:42.518271  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:42.518325  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:42.518342  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:42.518345  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:42.518366  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:42.518417  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:42.823337  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:42.823395  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:42.823438  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:42.841811  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:42.877778  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:42.877819  108877 retry.go:31] will retry after 298.649157ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:43.212922  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:43.212958  108877 retry.go:31] will retry after 522.015069ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:43.771555  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:43.771589  108877 retry.go:31] will retry after 664.326257ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:44.472134  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.472221  108877 retry.go:31] will retry after 153.745574ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.626669  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:44.645720  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:44.681791  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:44.681823  108877 retry.go:31] will retry after 365.465122ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:45.084885  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:45.084914  108877 retry.go:31] will retry after 466.75968ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:45.589343  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:45.589390  108877 retry.go:31] will retry after 488.601857ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:46.115089  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.115225  108877 provision.go:87] duration metric: took 3.615609417s to configureAuth
	W0919 22:56:46.115233  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.115249  108877 retry.go:31] will retry after 754.938625ms: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:46.871274  108877 provision.go:84] configureAuth start
	I0919 22:56:46.871388  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:46.889941  108877 provision.go:143] copyHostCerts
	I0919 22:56:46.889990  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:46.890037  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:46.890050  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:46.890160  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:46.890269  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:46.890296  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:46.890304  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:46.890360  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:46.890434  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:46.890459  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:46.890469  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:46.890499  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:46.890572  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:46.997796  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:46.997867  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:46.997912  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:47.017254  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:47.054744  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.054778  108877 retry.go:31] will retry after 308.508878ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:47.400043  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.400080  108877 retry.go:31] will retry after 493.608013ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:47.930962  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:47.930992  108877 retry.go:31] will retry after 488.73635ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:48.456395  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.456470  108877 retry.go:31] will retry after 197.32939ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.654934  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:48.674211  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:48.710143  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.710175  108877 retry.go:31] will retry after 134.018657ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:48.879983  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:48.880019  108877 retry.go:31] will retry after 327.178794ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:49.243596  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.243627  108877 retry.go:31] will retry after 696.883564ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:49.978365  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.978446  108877 provision.go:87] duration metric: took 3.10712947s to configureAuth
	W0919 22:56:49.978452  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:49.978461  108877 retry.go:31] will retry after 1.108872523s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.087560  108877 provision.go:84] configureAuth start
	I0919 22:56:51.087642  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:51.106657  108877 provision.go:143] copyHostCerts
	I0919 22:56:51.106704  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:51.106742  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:51.106755  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:51.106824  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:51.106932  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:51.106959  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:51.106965  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:51.106999  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:51.107073  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:51.107116  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:51.107123  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:51.107158  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:51.107241  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:51.139574  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:51.139642  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:51.139689  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:51.158649  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:51.195066  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.195097  108877 retry.go:31] will retry after 362.143833ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:51.594416  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.594447  108877 retry.go:31] will retry after 303.523109ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:51.934745  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:51.934770  108877 retry.go:31] will retry after 543.851524ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:52.515882  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.515974  108877 retry.go:31] will retry after 322.599797ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.839665  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:52.861040  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:52.897445  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:52.897480  108877 retry.go:31] will retry after 148.171313ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:53.082549  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:53.082578  108877 retry.go:31] will retry after 259.258531ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:53.377992  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:53.378028  108877 retry.go:31] will retry after 736.784844ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:54.152006  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:54.152129  108877 provision.go:87] duration metric: took 3.064543662s to configureAuth
	W0919 22:56:54.152144  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:54.152162  108877 retry.go:31] will retry after 2.515449118s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:56.669831  108877 provision.go:84] configureAuth start
	I0919 22:56:56.670043  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:56:56.688740  108877 provision.go:143] copyHostCerts
	I0919 22:56:56.688785  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:56.688823  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:56:56.688836  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:56:56.688903  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:56:56.689008  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:56.689034  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:56:56.689038  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:56:56.689070  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:56:56.689192  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:56.689224  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:56:56.689237  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:56:56.689269  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:56:56.689352  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:56:57.015996  108877 provision.go:177] copyRemoteCerts
	I0919 22:56:57.016051  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:56:57.016137  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:57.034711  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:57.070767  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.070801  108877 retry.go:31] will retry after 268.964622ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:57.376240  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.376285  108877 retry.go:31] will retry after 515.618696ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:57.928822  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:57.928857  108877 retry.go:31] will retry after 709.3811ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:58.674783  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:58.674856  108877 retry.go:31] will retry after 326.321162ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.001369  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:56:59.019209  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:56:59.055625  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.055653  108877 retry.go:31] will retry after 129.805557ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:59.222051  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.222084  108877 retry.go:31] will retry after 547.397983ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:56:59.805545  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:56:59.805581  108877 retry.go:31] will retry after 688.131924ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:00.530240  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:00.530347  108877 provision.go:87] duration metric: took 3.860436584s to configureAuth
	W0919 22:57:00.530368  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:00.530382  108877 retry.go:31] will retry after 3.473490773s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
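The cycle above repeats for the remainder of this failure: provision regenerates the node's server certificate, then tries to open an SSH session to ha-984158-m04 to copy the certs across, and every dial ends in "ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain", so the whole configureAuth step is retried with a longer backoff. For reference, a minimal Go sketch of a publickey-only SSH dial with golang.org/x/crypto/ssh follows; it is not minikube's sshutil implementation, and the address, user, and key path are placeholders lifted from the surrounding log. When the server rejects the one offered key, ssh.Dial returns exactly the handshake error recorded here.

// Illustrative sketch only, assuming golang.org/x/crypto/ssh; not minikube source.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialWithKey offers a single private key, mirroring the log's
// "attempted methods [none publickey]" (the "none" method is tried
// implicitly by the Go client before publickey).
func dialWithKey(addr, user, keyPath string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test container
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	// Placeholder values taken from the log above (127.0.0.1:32848, user "docker",
	// the node's id_rsa); purely illustrative.
	client, err := dialWithKey("127.0.0.1:32848", "docker", "/path/to/ha-984158-m04/id_rsa")
	if err != nil {
		fmt.Println("dial failed:", err) // prints the ssh handshake error when the key is rejected
		return
	}
	defer client.Close()
}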
	I0919 22:57:04.005067  108877 provision.go:84] configureAuth start
	I0919 22:57:04.005190  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:04.022589  108877 provision.go:143] copyHostCerts
	I0919 22:57:04.022626  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:04.022653  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:04.022659  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:04.022725  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:04.022798  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:04.022819  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:04.022824  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:04.022844  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:04.022887  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:04.022903  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:04.022908  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:04.022926  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:04.022998  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:04.433055  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:04.433134  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:04.433169  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:04.452162  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:04.487790  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:04.487816  108877 retry.go:31] will retry after 301.604842ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:04.826348  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:04.826384  108877 retry.go:31] will retry after 320.796627ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:05.183582  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:05.183617  108877 retry.go:31] will retry after 607.690423ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:05.826718  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:05.826781  108877 retry.go:31] will retry after 374.651417ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.202474  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:06.220929  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:06.258097  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.258150  108877 retry.go:31] will retry after 183.921318ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:06.478404  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.478436  108877 retry.go:31] will retry after 368.414927ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:06.883316  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:06.883350  108877 retry.go:31] will retry after 514.052172ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:07.434181  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:07.434210  108877 retry.go:31] will retry after 595.491046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:08.065650  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:08.065740  108877 provision.go:87] duration metric: took 4.060647903s to configureAuth
	W0919 22:57:08.065753  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:08.065765  108877 retry.go:31] will retry after 2.793620534s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:10.859931  108877 provision.go:84] configureAuth start
	I0919 22:57:10.860020  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:10.877832  108877 provision.go:143] copyHostCerts
	I0919 22:57:10.877873  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:10.877909  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:10.877923  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:10.877991  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:10.878141  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:10.878173  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:10.878181  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:10.878215  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:10.878285  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:10.878311  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:10.878321  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:10.878351  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:10.878423  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:10.984390  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:10.984447  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:10.984480  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:11.003216  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:11.038380  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.038425  108877 retry.go:31] will retry after 370.890016ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:11.445998  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.446033  108877 retry.go:31] will retry after 188.555467ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:11.671096  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:11.671146  108877 retry.go:31] will retry after 817.050629ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:12.525157  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.525243  108877 retry.go:31] will retry after 306.251712ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.831810  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:12.849689  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:12.885775  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:12.885803  108877 retry.go:31] will retry after 132.37261ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.055528  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.055563  108877 retry.go:31] will retry after 238.491118ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.330205  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.330240  108877 retry.go:31] will retry after 464.873837ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:13.831628  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:13.831673  108877 retry.go:31] will retry after 494.104964ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:14.362527  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:14.362621  108877 provision.go:87] duration metric: took 3.502663397s to configureAuth
	W0919 22:57:14.362636  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:14.362646  108877 retry.go:31] will retry after 3.171081362s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:17.533852  108877 provision.go:84] configureAuth start
	I0919 22:57:17.533970  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:17.553677  108877 provision.go:143] copyHostCerts
	I0919 22:57:17.553714  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:17.553749  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:17.553761  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:17.553840  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:17.553935  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:17.553961  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:17.553968  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:17.553998  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:17.554058  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:17.554084  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:17.554090  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:17.554163  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:17.554245  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:17.842271  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:17.842335  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:17.842369  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:17.860493  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:17.896364  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:17.896395  108877 retry.go:31] will retry after 245.526695ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.178923  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.178957  108877 retry.go:31] will retry after 291.474893ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.506844  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.506893  108877 retry.go:31] will retry after 428.15725ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:18.971538  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:18.971609  108877 retry.go:31] will retry after 328.173688ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.300150  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:19.318702  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:19.355566  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.355602  108877 retry.go:31] will retry after 195.443544ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:19.588029  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.588064  108877 retry.go:31] will retry after 197.002623ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:19.820782  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:19.820815  108877 retry.go:31] will retry after 306.66473ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:20.163931  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:20.164025  108877 provision.go:87] duration metric: took 2.630147192s to configureAuth
	W0919 22:57:20.164039  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:20.164057  108877 retry.go:31] will retry after 5.88081309s: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.047184  108877 provision.go:84] configureAuth start
	I0919 22:57:26.047287  108877 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-984158-m04
	I0919 22:57:26.066549  108877 provision.go:143] copyHostCerts
	I0919 22:57:26.066588  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:26.066631  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 22:57:26.066646  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 22:57:26.066714  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 22:57:26.066812  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:26.066839  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 22:57:26.066851  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 22:57:26.066885  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 22:57:26.066949  108877 vm_assets.go:164] NewFileAsset: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:26.066974  108877 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 22:57:26.066984  108877 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 22:57:26.067013  108877 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 22:57:26.067083  108877 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.ha-984158-m04 san=[127.0.0.1 192.168.49.5 ha-984158-m04 localhost minikube]
	I0919 22:57:26.430292  108877 provision.go:177] copyRemoteCerts
	I0919 22:57:26.430358  108877 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 22:57:26.430413  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:26.448874  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:26.485062  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.485093  108877 retry.go:31] will retry after 343.157141ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:26.863852  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:26.863899  108877 retry.go:31] will retry after 287.302046ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:27.186803  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:27.186834  108877 retry.go:31] will retry after 756.208988ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:27.979672  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:27.979754  108877 retry.go:31] will retry after 357.114937ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.337288  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:28.359209  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:28.395795  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.395827  108877 retry.go:31] will retry after 334.191783ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:28.765402  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:28.765435  108877 retry.go:31] will retry after 479.582515ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:29.282486  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:29.282516  108877 retry.go:31] will retry after 731.889055ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.052091  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052209  108877 provision.go:87] duration metric: took 4.00499904s to configureAuth
	W0919 22:57:30.052219  108877 ubuntu.go:193] configureAuth failed: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052233  108877 ubuntu.go:202] Error configuring auth during provisioning Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.052243  108877 machine.go:96] duration metric: took 11m1.1194403s to provisionDockerMachine
	I0919 22:57:30.052319  108877 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 22:57:30.052364  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:30.072494  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:30.108866  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.108893  108877 retry.go:31] will retry after 233.851556ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.378888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.378916  108877 retry.go:31] will retry after 336.456758ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:30.752888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:30.752921  108877 retry.go:31] will retry after 321.92269ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:31.112464  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:31.112493  108877 retry.go:31] will retry after 649.982973ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:31.801129  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:31.801197  108877 retry.go:31] will retry after 218.292036ms: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.020708  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:32.039859  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:32.075888  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.075943  108877 retry.go:31] will retry after 192.036574ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:32.306777  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.306815  108877 retry.go:31] will retry after 210.414159ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:32.556133  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:32.556165  108877 retry.go:31] will retry after 739.62039ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331746  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331819  108877 start.go:268] error running df -h /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.331833  108877 start.go:235] error getting percentage of /var that is free: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.331892  108877 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 22:57:33.331942  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:33.350393  108877 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32848 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/ha-984158-m04/id_rsa Username:docker}
	W0919 22:57:33.386406  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.386434  108877 retry.go:31] will retry after 349.776959ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:33.772275  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:33.772308  108877 retry.go:31] will retry after 325.543128ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:34.135049  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:34.135160  108877 retry.go:31] will retry after 409.049881ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:34.579989  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:34.580036  108877 retry.go:31] will retry after 621.130338ms: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237720  108877 sshutil.go:64] dial failure (will retry): ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237802  108877 start.go:283] error running df -BG /var: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237833  108877 start.go:240] error getting GiB of /var that is available: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:35.237839  108877 fix.go:56] duration metric: took 11m6.646308817s for fixHost
	I0919 22:57:35.237846  108877 start.go:83] releasing machines lock for "ha-984158-m04", held for 11m6.646337997s
	W0919 22:57:35.237863  108877 start.go:714] error starting host: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	W0919 22:57:35.237942  108877 out.go:285] ! StartHost failed, but will try again: provision: Temporary Error: NewSession: new client: new client: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:35.237965  108877 start.go:729] Will try again in 5 seconds ...
	I0919 22:57:40.239023  108877 start.go:360] acquireMachinesLock for ha-984158-m04: {Name:mk93a1ca09bde0753cfe8f658686e6c04601194f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 22:57:40.239172  108877 start.go:364] duration metric: took 81.107µs to acquireMachinesLock for "ha-984158-m04"
	I0919 22:57:40.239194  108877 start.go:96] Skipping create...Using existing machine configuration
	I0919 22:57:40.239201  108877 fix.go:54] fixHost starting: m04
	I0919 22:57:40.239431  108877 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:57:40.257713  108877 fix.go:112] recreateIfNeeded on ha-984158-m04: state=Running err=<nil>
	W0919 22:57:40.257736  108877 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 22:57:40.259573  108877 out.go:252] * Updating the running docker "ha-984158-m04" container ...
	I0919 22:57:40.259646  108877 machine.go:93] provisionDockerMachine start ...
	I0919 22:57:40.259712  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 22:57:40.278585  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 22:57:40.278817  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 22:57:40.278833  108877 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 22:57:40.315069  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:43.351146  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:46.388339  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:49.426746  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:52.463707  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:55.500573  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:57:58.538182  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:01.575927  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:04.616375  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:07.653326  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:10.690229  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:13.728885  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:16.768560  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:19.806622  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:22.842755  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:25.881701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:28.917980  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:31.955190  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:34.992919  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:38.030446  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:41.067474  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:44.104421  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:47.142056  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:50.180294  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:53.217514  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:56.255024  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:58:59.292319  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:02.329219  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:05.366989  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:08.402945  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:11.439816  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:14.476386  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:17.513513  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:20.549641  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:23.586144  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:26.623276  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:29.660785  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:32.697636  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:35.735863  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:38.774479  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:41.811818  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:44.850018  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:47.887261  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:50.924246  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:53.961078  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 22:59:56.999866  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:00.037067  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:03.074676  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:06.113750  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:09.151270  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:12.189380  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:15.227164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:18.263925  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:21.301513  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:24.339191  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:27.375639  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:30.410883  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:33.448495  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:36.487617  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:39.525454  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:42.525653  108877 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:00:42.525702  108877 ubuntu.go:182] provisioning hostname "ha-984158-m04"
	I0919 23:00:42.525804  108877 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-984158-m04
	I0919 23:00:42.546781  108877 main.go:141] libmachine: Using SSH client type: native
	I0919 23:00:42.547011  108877 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 32848 <nil> <nil>}
	I0919 23:00:42.547024  108877 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-984158-m04 && echo "ha-984158-m04" | sudo tee /etc/hostname
	I0919 23:00:42.582767  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:45.622025  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:48.658598  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:51.696578  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:54.735790  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:00:57.772254  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:00.809145  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:03.847360  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:06.886611  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:09.924681  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:12.962276  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:16.000899  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:19.036953  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:22.074167  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:25.113341  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:28.150651  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:31.187163  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:34.225742  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:37.261917  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:40.297809  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:43.333952  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:46.372525  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:49.410324  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:52.446487  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:55.484663  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:01:58.522655  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:01.563288  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:04.604701  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:07.641452  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:10.678188  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:13.715164  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:16.755096  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:19.793467  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:22.831053  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:25.869043  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:28.905456  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:31.942385  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:34.980828  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:38.019484  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:41.055921  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:44.092932  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:47.133154  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:50.170708  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:53.207283  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:56.245651  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:02:59.283000  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:02.320057  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:05.356723  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:08.393190  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:11.429671  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0919 23:03:14.469188  108877 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	
	
	==> CRI-O <==
	Sep 19 22:46:25 ha-984158 crio[559]: time="2025-09-19 22:46:25.542115954Z" level=info msg="Started container" PID=1328 containerID=7f27c440e476282af7cf3b827db8434ab8e100001b063e76b7575e1f7344eafb description=default/busybox-7b57f96db7-rnjl7/busybox id=5af4c15d-ffeb-4386-bbef-261b30fb0729 name=/runtime.v1.RuntimeService/StartContainer sandboxID=686ec2c48be40cf19861ba203d7c998afcdd12a5165aeaf4ce884d0c39d43fd8
	Sep 19 22:46:25 ha-984158 crio[559]: time="2025-09-19 22:46:25.552790822Z" level=info msg="Started container" PID=1355 containerID=fedbb07b60877e1b2cd5d959e25a0cf4ee0dd50a5bab5aac84e7b5b754eae209 description=kube-system/kindnet-rd882/kindnet-cni id=c8fa5578-2fab-4a4b-9585-0a286f3b721a name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc6bb81a280c172b8fd4ebc9686f2558615d2320191c94c6ca365308355a0ab1
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.896915106Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": CREATE"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.901460542Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.901492106Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.901508729Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": WRITE"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.905440351Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.905470526Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.905482668Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist.temp\": RENAME"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.909438037Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.909469710Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.909484118Z" level=info msg="CNI monitoring event \"/etc/cni/net.d/10-kindnet.conflist\": CREATE"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.913289746Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Sep 19 22:46:35 ha-984158 crio[559]: time="2025-09-19 22:46:35.913327781Z" level=info msg="Updated default CNI network name to kindnet"
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.081059415Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1ebc57d5-a740-47d9-9987-4e60e221001c name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.081334313Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=1ebc57d5-a740-47d9-9987-4e60e221001c name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.082014285Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=21087041-472e-4747-8a19-5cc9b9f8a4cf name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.082263997Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=21087041-472e-4747-8a19-5cc9b9f8a4cf name=/runtime.v1.ImageService/ImageStatus
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.083175100Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=6853401c-2303-47b4-9906-bd3084a6d579 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.083296583Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.095430340Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/86923bcd8525df4dd21bbb799be79863cfad97a9b0da91409a1255398e8fdef7/merged/etc/passwd: no such file or directory"
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.095619433Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/86923bcd8525df4dd21bbb799be79863cfad97a9b0da91409a1255398e8fdef7/merged/etc/group: no such file or directory"
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.153282841Z" level=info msg="Created container afb93cd340c1a2c2c23fb734c138555f3e8faaa1f7e37c603ea67157c885c59e: kube-system/storage-provisioner/storage-provisioner" id=6853401c-2303-47b4-9906-bd3084a6d579 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.153984164Z" level=info msg="Starting container: afb93cd340c1a2c2c23fb734c138555f3e8faaa1f7e37c603ea67157c885c59e" id=efb32bf1-35e8-4c84-b8f6-cfe1f1e2d6aa name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 22:46:56 ha-984158 crio[559]: time="2025-09-19 22:46:56.161032963Z" level=info msg="Started container" PID=1719 containerID=afb93cd340c1a2c2c23fb734c138555f3e8faaa1f7e37c603ea67157c885c59e description=kube-system/storage-provisioner/storage-provisioner id=efb32bf1-35e8-4c84-b8f6-cfe1f1e2d6aa name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0ce0c4fb975ec85acbd0483511c664e8509df90a23573218e3c55030246d3bf
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	afb93cd340c1a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       5                   a0ce0c4fb975e       storage-provisioner
	fedbb07b60877       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   16 minutes ago      Running             kindnet-cni               2                   dc6bb81a280c1       kindnet-rd882
	38154cc32f77e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 minutes ago      Running             coredns                   2                   ef30b3aa9fb11       coredns-66bc5c9577-ltjmz
	b7379b6da6735       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce   16 minutes ago      Running             kube-proxy                2                   62536f3daf784       kube-proxy-hdxxn
	7f27c440e4762       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a   16 minutes ago      Running             busybox                   2                   686ec2c48be40       busybox-7b57f96db7-rnjl7
	82fffa736bbae       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   16 minutes ago      Running             coredns                   2                   0ef5f4b6a7632       coredns-66bc5c9577-5gnbx
	bd02142757c9d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Exited              storage-provisioner       4                   a0ce0c4fb975e       storage-provisioner
	965fe458c9ef0       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90   17 minutes ago      Running             kube-apiserver            2                   7a2352c9ba15d       kube-apiserver-ha-984158
	59d5671e1926f       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc   17 minutes ago      Running             kube-scheduler            2                   b17e0b9c519a3       kube-scheduler-ha-984158
	e17c7da10f5f3       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634   17 minutes ago      Running             kube-controller-manager   2                   de818e09e70e2       kube-controller-manager-ha-984158
	28f33c04301b2       765655ea6078171c416896d7cc155c1263a0411d30caaa03d7365aecb99fdf23   17 minutes ago      Running             kube-vip                  2                   d6df5b205fc00       kube-vip-ha-984158
	8a1413fdbad44       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   17 minutes ago      Running             etcd                      2                   aca77eab19534       etcd-ha-984158
	
	
	==> coredns [38154cc32f77e307df53f987d25863457cdfcaa7ab6981fde24fb82bc9cd7f9f] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:53490 - 27495 "HINFO IN 4894416525253804170.3916155572095975355. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.032002652s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> coredns [82fffa736bbaeaeb04b07289ced342ae70616da65571915c1c3daaf112d045ea] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:51529 - 12035 "HINFO IN 4645800045069410117.2835520786339847134. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.043479488s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               ha-984158
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T22_33_44_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:33:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:03:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:01:03 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:01:03 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:01:03 +0000   Fri, 19 Sep 2025 22:33:38 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:01:03 +0000   Fri, 19 Sep 2025 22:33:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    ha-984158
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 84ce9e8c576b4411b4de27c9fa41c3cb
	  System UUID:                e5418393-d7bf-429a-8ff0-9daee26920dd
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-rnjl7             0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 coredns-66bc5c9577-5gnbx             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 coredns-66bc5c9577-ltjmz             100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     29m
	  kube-system                 etcd-ha-984158                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-rd882                        100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-984158             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-984158    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-hdxxn                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-984158             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-984158                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             290Mi (0%)  390Mi (1%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 24m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x8 over 29m)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  NodeReady                29m                kubelet          Node ha-984158 status is now: NodeReady
	  Normal  RegisteredNode           29m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           26m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     24m (x8 over 24m)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-984158 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-984158 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-984158 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-984158 event: Registered Node ha-984158 in Controller
	
	
	Name:               ha-984158-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-984158-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=ha-984158
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_09_19T22_34_15_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 22:34:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-984158-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:03:14 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 22:58:18 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 22:58:18 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 22:58:18 +0000   Fri, 19 Sep 2025 22:34:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 22:58:18 +0000   Fri, 19 Sep 2025 22:34:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.3
	  Hostname:    ha-984158-m02
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 d3e3fe8d84474b71b3a63c42700435cc
	  System UUID:                370c0cbf-a33c-464e-aad2-0ef3d76b4ebb
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-7b57f96db7-8s7jn                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 etcd-ha-984158-m02                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29m
	  kube-system                 kindnet-th979                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      29m
	  kube-system                 kube-apiserver-ha-984158-m02             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-ha-984158-m02    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-plrn2                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-ha-984158-m02             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-vip-ha-984158-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24m                kube-proxy       
	  Normal  Starting                 29m                kube-proxy       
	  Normal  RegisteredNode           29m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           29m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           28m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 26m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     26m (x8 over 26m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    26m (x8 over 26m)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  26m (x8 over 26m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           26m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  NodeHasSufficientMemory  24m (x8 over 24m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  Starting                 24m                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    24m (x8 over 24m)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x8 over 24m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           24m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  Starting                 17m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  17m (x8 over 17m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    17m (x8 over 17m)  kubelet          Node ha-984158-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     17m (x8 over 17m)  kubelet          Node ha-984158-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-984158-m02 event: Registered Node ha-984158-m02 in Controller
	
	
	==> dmesg <==
	[  +0.103037] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.029723] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +8.096733] kauditd_printk_skb: 47 callbacks suppressed
	[Sep19 22:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000018] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.041768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.022949] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023846] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023948] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000042] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +1.023825] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	
	
	==> etcd [8a1413fdbad4478ef7c6a59abe39a8ae23ba5a5026c3643601daff05c92d85d2] <==
	{"level":"warn","ts":"2025-09-19T22:46:23.255218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.262283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51836","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.270258Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51870","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.278640Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51908","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.286701Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.294824Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.302438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.309301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52030","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.317968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.325664Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.333056Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52148","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.341057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.348817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52200","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.356251Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.363906Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.380119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52328","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.388350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52338","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.398276Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T22:46:23.455192Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52382","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T22:56:22.892391Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":3613}
	{"level":"info","ts":"2025-09-19T22:56:22.945936Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":3613,"took":"52.918312ms","hash":2134618105,"current-db-size-bytes":7262208,"current-db-size":"7.3 MB","current-db-size-in-use-bytes":2560000,"current-db-size-in-use":"2.6 MB"}
	{"level":"info","ts":"2025-09-19T22:56:22.945995Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":2134618105,"revision":3613,"compact-revision":-1}
	{"level":"info","ts":"2025-09-19T23:01:22.898671Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":4242}
	{"level":"info","ts":"2025-09-19T23:01:22.916356Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":4242,"took":"17.253627ms","hash":746538360,"current-db-size-bytes":7262208,"current-db-size":"7.3 MB","current-db-size-in-use-bytes":2183168,"current-db-size-in-use":"2.2 MB"}
	{"level":"info","ts":"2025-09-19T23:01:22.916416Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":746538360,"revision":4242,"compact-revision":3613}
	
	
	==> kernel <==
	 23:03:21 up  1:45,  0 users,  load average: 0.24, 0.32, 0.42
	Linux ha-984158 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [fedbb07b60877e1b2cd5d959e25a0cf4ee0dd50a5bab5aac84e7b5b754eae209] <==
	I0919 23:02:15.897447       1 main.go:301] handling current node
	I0919 23:02:25.903209       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:02:25.903254       1 main.go:301] handling current node
	I0919 23:02:25.903274       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:02:25.903281       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 23:02:35.895993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:02:35.896067       1 main.go:301] handling current node
	I0919 23:02:35.896085       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:02:35.896093       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 23:02:45.896313       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:02:45.896353       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 23:02:45.896543       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:02:45.896555       1 main.go:301] handling current node
	I0919 23:02:55.904216       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:02:55.904257       1 main.go:301] handling current node
	I0919 23:02:55.904282       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:02:55.904291       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 23:03:05.896508       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:03:05.896556       1 main.go:301] handling current node
	I0919 23:03:05.896578       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:03:05.896586       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	I0919 23:03:15.895624       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I0919 23:03:15.896333       1 main.go:301] handling current node
	I0919 23:03:15.896448       1 main.go:297] Handling node with IPs: map[192.168.49.3:{}]
	I0919 23:03:15.896645       1 main.go:324] Node ha-984158-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [965fe458c9ef0bcf4be38ed775e73a5039de6789a97db59821b4314d727cd461] <==
	I0919 22:48:44.820138       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:48:56.008288       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:49:58.739539       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:49:58.780802       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:51:18.583537       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:51:23.361381       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:52:19.182932       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:52:30.489561       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:53:21.106392       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:53:47.681438       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:54:37.087800       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:54:49.987670       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:55:49.426875       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:55:51.759725       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:56:23.975241       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I0919 22:57:08.847192       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:57:17.022635       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:58:20.401066       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:58:43.762951       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:59:42.360553       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 22:59:58.258720       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:00:47.403487       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:01:25.122478       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:02:04.826673       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	I0919 23:02:53.645399       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [e17c7da10f5f3d3a61a22577ab24cce215e5afad3a1246e0ef06af9429e33cc6] <==
	E0919 22:46:27.436673       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:27.436689       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:27.436699       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:27.436705       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	I0919 22:46:27.436759       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 22:46:27.441892       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E0919 22:46:47.437131       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:47.437164       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:47.437170       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:47.437175       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:46:47.437185       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:47:07.437314       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:47:07.437350       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:47:07.437360       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	E0919 22:47:07.437367       1 gc_controller.go:151] "Failed to get node" err="node \"ha-984158-m03\" not found" logger="pod-garbage-collector-controller" node="ha-984158-m03"
	I0919 22:47:07.450170       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-269nt"
	E0919 22:47:07.452916       1 gc_controller.go:256] "Unhandled Error" err="pods \"kindnet-269nt\" not found" logger="UnhandledError"
	I0919 22:47:07.452959       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-984158-m03"
	E0919 22:47:07.462482       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8e9c896-7a8a-4040-b0b9-870c657ba1fc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-19T22:47:07Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":2,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"observedGeneration\\\":2,\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-controller-manager-ha-984158-m03\": pods \"kube-controller-manager-ha-984158-m03\" not found" logger="UnhandledError"
	I0919 22:47:07.462524       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-984158-m03"
	I0919 22:47:07.482997       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-984158-m03"
	I0919 22:47:07.483031       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-984158-m03"
	I0919 22:47:07.508136       1 gc_controller.go:259] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-984158-m03"
	I0919 22:47:07.508354       1 gc_controller.go:343] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-k2drm"
	E0919 22:47:07.511665       1 gc_controller.go:256] "Unhandled Error" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"040bf3f7-8d97-4799-b3a2-12b57eec38ef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"},{\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-09-19T22:47:07Z\\\",\\\"message\\\":\\\"PodGC: node no longer exists\\\",\\\"observedGeneration\\\":1,\\\"reason\\\":\\\"DeletionByPodGC\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"DisruptionTarget\\\"}],\\\"phase\\\":\\\"Failed\\\"}}\" for pod \"kube-system\"/\"kube-proxy-k2drm\": pods \"kube-proxy-k2drm\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [b7379b6da67355ec47c959a2bc7e82b54b312757ccf120430e7ea42ea58119d8] <==
	I0919 22:46:25.586985       1 server_linux.go:53] "Using iptables proxy"
	I0919 22:46:25.650511       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 22:46:25.750726       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 22:46:25.750767       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E0919 22:46:25.750866       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 22:46:25.771598       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 22:46:25.771681       1 server_linux.go:132] "Using iptables Proxier"
	I0919 22:46:25.778163       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 22:46:25.778652       1 server.go:527] "Version info" version="v1.34.0"
	I0919 22:46:25.778694       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:46:25.780304       1 config.go:106] "Starting endpoint slice config controller"
	I0919 22:46:25.780335       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 22:46:25.780442       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 22:46:25.780461       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 22:46:25.780482       1 config.go:309] "Starting node config controller"
	I0919 22:46:25.780488       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 22:46:25.780479       1 config.go:200] "Starting service config controller"
	I0919 22:46:25.780510       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 22:46:25.880572       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 22:46:25.880587       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 22:46:25.880636       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 22:46:25.880661       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [59d5671e1926f80170c6c1c41f10118e263e5a91b543ec28fe2ef7db410ee4ce] <==
	I0919 22:46:19.926033       1 serving.go:386] Generated self-signed cert in-memory
	W0919 22:46:23.945396       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 22:46:23.945435       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 22:46:23.945455       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 22:46:23.945465       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 22:46:23.992640       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 22:46:23.992675       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 22:46:23.998640       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 22:46:23.998675       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:46:24.002280       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 22:46:23.998708       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 22:46:24.118890       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:01:19 ha-984158 kubelet[715]: E0919 23:01:19.102584     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322879102325816  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:29 ha-984158 kubelet[715]: E0919 23:01:29.104271     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322889103933328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:29 ha-984158 kubelet[715]: E0919 23:01:29.104329     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322889103933328  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:39 ha-984158 kubelet[715]: E0919 23:01:39.105830     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322899105487302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:39 ha-984158 kubelet[715]: E0919 23:01:39.105884     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322899105487302  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:49 ha-984158 kubelet[715]: E0919 23:01:49.106973     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322909106714603  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:49 ha-984158 kubelet[715]: E0919 23:01:49.107005     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322909106714603  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:59 ha-984158 kubelet[715]: E0919 23:01:59.108588     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322919108354796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:01:59 ha-984158 kubelet[715]: E0919 23:01:59.108618     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322919108354796  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:09 ha-984158 kubelet[715]: E0919 23:02:09.109838     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322929109610826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:09 ha-984158 kubelet[715]: E0919 23:02:09.109876     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322929109610826  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:19 ha-984158 kubelet[715]: E0919 23:02:19.111799     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322939111480891  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:19 ha-984158 kubelet[715]: E0919 23:02:19.111830     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322939111480891  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:29 ha-984158 kubelet[715]: E0919 23:02:29.112936     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322949112733629  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:29 ha-984158 kubelet[715]: E0919 23:02:29.112971     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322949112733629  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:39 ha-984158 kubelet[715]: E0919 23:02:39.114777     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322959114543683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:39 ha-984158 kubelet[715]: E0919 23:02:39.114819     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322959114543683  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:49 ha-984158 kubelet[715]: E0919 23:02:49.116708     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322969116478713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:49 ha-984158 kubelet[715]: E0919 23:02:49.116746     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322969116478713  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:59 ha-984158 kubelet[715]: E0919 23:02:59.117926     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322979117744261  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:02:59 ha-984158 kubelet[715]: E0919 23:02:59.117959     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322979117744261  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:03:09 ha-984158 kubelet[715]: E0919 23:03:09.119120     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322989118892714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:03:09 ha-984158 kubelet[715]: E0919 23:03:09.119183     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322989118892714  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:03:19 ha-984158 kubelet[715]: E0919 23:03:19.120333     715 eviction_manager.go:259] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: image_filesystems:{timestamp:1758322999120132300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	Sep 19 23:03:19 ha-984158 kubelet[715]: E0919 23:03:19.120391     715 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: image_filesystems:{timestamp:1758322999120132300  fs_id:{mountpoint:\"/var/lib/containers/storage/overlay-images\"}  used_bytes:{value:149445}  inodes_used:{value:69}}"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-984158 -n ha-984158
helpers_test.go:269: (dbg) Run:  kubectl --context ha-984158 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-7b57f96db7-qctnj
helpers_test.go:282: ======> post-mortem[TestMultiControlPlane/serial/RestartCluster]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj
helpers_test.go:290: (dbg) kubectl --context ha-984158 describe pod busybox-7b57f96db7-qctnj:

                                                
                                                
-- stdout --
	Name:             busybox-7b57f96db7-qctnj
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=7b57f96db7
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-7b57f96db7
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-jf9wg (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-jf9wg:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  17m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17m                  default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  16m                  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  6m58s (x2 over 11m)  default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  17m (x2 over 17m)    default-scheduler  0/3 nodes are available: 1 node(s) were unschedulable, 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  118s (x4 over 16m)   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. no new claims to deallocate, preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (1030.14s)
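Editor's note (not part of the captured run): the describe output above shows the busybox pod stuck Pending because every remaining schedulable node fails its pod anti-affinity rules. Assuming the ReplicaSet busybox-7b57f96db7 is owned by a Deployment named "busybox" (implied by Controlled By and the pod-template-hash label, not shown in the log), the constraint and the surviving nodes could be inspected with standard kubectl commands, for example:

	# illustrative only; prints the anti-affinity stanza the scheduler is enforcing
	kubectl --context ha-984158 get deploy busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'
	# illustrative only; confirms how many schedulable nodes remain after the restart
	kubectl --context ha-984158 get nodes -o wide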

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (7.79s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-042753 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753: exit status 2 (329.594835ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042753 -n no-preload-042753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042753 -n no-preload-042753: exit status 2 (351.361468ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-042753 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753: exit status 2 (347.007329ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042753 -n no-preload-042753
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-042753 -n no-preload-042753: exit status 2 (334.425802ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
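Editor's note (not part of the captured run): the failure is that kubelet still reports "Stopped" after unpause while the API server reports "Running". A hedged, illustrative way to cross-check whether kubelet actually restarted inside the node container, using only standard minikube commands against this profile, would be:

	# illustrative only; queries systemd inside the no-preload-042753 node container
	out/minikube-linux-amd64 ssh -p no-preload-042753 -- sudo systemctl is-active kubelet
	# illustrative only; same status template the test polls
	out/minikube-linux-amd64 status -p no-preload-042753 --format={{.Kubelet}}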
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-042753
helpers_test.go:243: (dbg) docker inspect no-preload-042753:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124",
	        "Created": "2025-09-19T23:20:40.557817758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:22:00.903639308Z",
	            "FinishedAt": "2025-09-19T23:21:59.900192386Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/hosts",
	        "LogPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124-json.log",
	        "Name": "/no-preload-042753",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-042753:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-042753",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124",
	                "LowerDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-042753",
	                "Source": "/var/lib/docker/volumes/no-preload-042753/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-042753",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-042753",
	                "name.minikube.sigs.k8s.io": "no-preload-042753",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9877137170d9a52c3acde345442271113bb96cdcf3ee547304175bdcf70eaedf",
	            "SandboxKey": "/var/run/docker/netns/9877137170d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-042753": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:86:c9:d6:bb:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "953cfdb974f23a5422d321f747f028d933c5997145eda1e683201f237462ca50",
	                    "EndpointID": "55f7280fb6f38b0ca7c886c8630a83ca7041b172a504851292f5a1cc4cfd017d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-042753",
	                        "d4496fbe8d25"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753: exit status 2 (351.430745ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-042753 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-042753 logs -n 25: (1.407461348s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p kubernetes-upgrade-496007                                                                                                                                                                                                                  │ kubernetes-upgrade-496007    │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ start   │ -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-496007    │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │                     │
	│ delete  │ -p missing-upgrade-322300                                                                                                                                                                                                                     │ missing-upgrade-322300       │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ start   │ -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-131186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ stop    │ -p old-k8s-version-131186 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-131186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ start   │ -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-042753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ stop    │ -p no-preload-042753 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:22 UTC │
	│ addons  │ enable dashboard -p no-preload-042753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ image   │ old-k8s-version-131186 image list --format=json                                                                                                                                                                                               │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ pause   │ -p old-k8s-version-131186 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ unpause │ -p old-k8s-version-131186 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p old-k8s-version-131186                                                                                                                                                                                                                     │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p old-k8s-version-131186                                                                                                                                                                                                                     │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │                     │
	│ start   │ -p cert-expiration-463082 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463082       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p cert-expiration-463082                                                                                                                                                                                                                     │ cert-expiration-463082       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p disable-driver-mounts-815969                                                                                                                                                                                                               │ disable-driver-mounts-815969 │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │                     │
	│ image   │ no-preload-042753 image list --format=json                                                                                                                                                                                                    │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ pause   │ -p no-preload-042753 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ unpause │ -p no-preload-042753 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:22:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:22:45.595913  283801 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:22:45.596300  283801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:22:45.596314  283801 out.go:374] Setting ErrFile to fd 2...
	I0919 23:22:45.596320  283801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:22:45.596650  283801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:22:45.597836  283801 out.go:368] Setting JSON to false
	I0919 23:22:45.600344  283801 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7516,"bootTime":1758316650,"procs":763,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:22:45.600483  283801 start.go:140] virtualization: kvm guest
	I0919 23:22:45.602800  283801 out.go:179] * [default-k8s-diff-port-523696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:22:45.604356  283801 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:22:45.604416  283801 notify.go:220] Checking for updates...
	I0919 23:22:45.607047  283801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:22:45.608835  283801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:22:45.610025  283801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 23:22:45.611338  283801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:22:45.613443  283801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W0919 23:22:41.543989  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	W0919 23:22:44.041707  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:45.615620  283801 config.go:182] Loaded profile config "embed-certs-756077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.615751  283801 config.go:182] Loaded profile config "kubernetes-upgrade-496007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.615913  283801 config.go:182] Loaded profile config "no-preload-042753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.616059  283801 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:22:45.652598  283801 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:22:45.652707  283801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:22:45.727444  283801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:22:45.715390918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:22:45.727557  283801 docker.go:318] overlay module found
	I0919 23:22:45.729602  283801 out.go:179] * Using the docker driver based on user configuration
	I0919 23:22:45.731157  283801 start.go:304] selected driver: docker
	I0919 23:22:45.731178  283801 start.go:918] validating driver "docker" against <nil>
	I0919 23:22:45.731194  283801 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:22:45.731820  283801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:22:45.808411  283801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:22:45.795341118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:22:45.808655  283801 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:22:45.808919  283801 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:22:45.813276  283801 out.go:179] * Using Docker driver with root privileges
	I0919 23:22:45.817282  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:22:45.817379  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:45.817396  283801 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:22:45.817497  283801 start.go:348] cluster config:
	{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0919 23:22:45.819141  283801 out.go:179] * Starting "default-k8s-diff-port-523696" primary control-plane node in "default-k8s-diff-port-523696" cluster
	I0919 23:22:45.820325  283801 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 23:22:45.821800  283801 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:22:45.823206  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:45.823238  283801 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:22:45.823262  283801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:22:45.823275  283801 cache.go:58] Caching tarball of preloaded images
	I0919 23:22:45.823389  283801 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:22:45.823406  283801 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:22:45.823516  283801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:22:45.823540  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json: {Name:mk094da9574bd9890ccffdc8df893ebc18aee319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:45.857175  283801 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:22:45.857201  283801 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:22:45.857220  283801 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:22:45.857251  283801 start.go:360] acquireMachinesLock for default-k8s-diff-port-523696: {Name:mk3e8cf47fc7b3222021a2ea03ba5708af5f316a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:22:45.857368  283801 start.go:364] duration metric: took 96.802µs to acquireMachinesLock for "default-k8s-diff-port-523696"
	I0919 23:22:45.857401  283801 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:22:45.857493  283801 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:22:41.739404  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 23:22:41.739471  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:41.739533  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:41.785110  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:41.785136  257816 cri.go:89] found id: "2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:41.785143  257816 cri.go:89] found id: ""
	I0919 23:22:41.785152  257816 logs.go:282] 2 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]
	I0919 23:22:41.785207  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.789586  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.793327  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:41.793391  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:41.834505  257816 cri.go:89] found id: ""
	I0919 23:22:41.834531  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.834541  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:41.834548  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:41.834601  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:41.876620  257816 cri.go:89] found id: ""
	I0919 23:22:41.876649  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.876659  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:41.876667  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:41.876722  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:41.917202  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:41.917227  257816 cri.go:89] found id: ""
	I0919 23:22:41.917237  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:41.917304  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.921398  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:41.921463  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:41.963433  257816 cri.go:89] found id: ""
	I0919 23:22:41.963460  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.963471  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:41.963478  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:41.963526  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:42.004741  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:42.004766  257816 cri.go:89] found id: "a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0"
	I0919 23:22:42.004776  257816 cri.go:89] found id: ""
	I0919 23:22:42.004801  257816 logs.go:282] 2 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074 a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0]
	I0919 23:22:42.004869  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:42.009032  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:42.015263  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:42.015338  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:42.068053  257816 cri.go:89] found id: ""
	I0919 23:22:42.068084  257816 logs.go:282] 0 containers: []
	W0919 23:22:42.068094  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:42.068115  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:42.068174  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:42.122936  257816 cri.go:89] found id: ""
	I0919 23:22:42.122962  257816 logs.go:282] 0 containers: []
	W0919 23:22:42.122973  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:42.122992  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:42.123007  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:22:42.222851  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:42.222887  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 23:22:45.859755  283801 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:22:45.860002  283801 start.go:159] libmachine.API.Create for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:22:45.860038  283801 client.go:168] LocalClient.Create starting
	I0919 23:22:45.860140  283801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 23:22:45.860177  283801 main.go:141] libmachine: Decoding PEM data...
	I0919 23:22:45.860192  283801 main.go:141] libmachine: Parsing certificate...
	I0919 23:22:45.860248  283801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 23:22:45.860275  283801 main.go:141] libmachine: Decoding PEM data...
	I0919 23:22:45.860283  283801 main.go:141] libmachine: Parsing certificate...
	I0919 23:22:45.860595  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:22:45.881526  283801 cli_runner.go:211] docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:22:45.881627  283801 network_create.go:284] running [docker network inspect default-k8s-diff-port-523696] to gather additional debugging logs...
	I0919 23:22:45.881652  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696
	W0919 23:22:45.904655  283801 cli_runner.go:211] docker network inspect default-k8s-diff-port-523696 returned with exit code 1
	I0919 23:22:45.904691  283801 network_create.go:287] error running [docker network inspect default-k8s-diff-port-523696]: docker network inspect default-k8s-diff-port-523696: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-523696 not found
	I0919 23:22:45.904714  283801 network_create.go:289] output of [docker network inspect default-k8s-diff-port-523696]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-523696 not found
	
	** /stderr **
	I0919 23:22:45.904810  283801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:22:45.925215  283801 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8b1b6c79ac61 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:3e:90:cd:d5:3a} reservation:<nil>}
	I0919 23:22:45.925942  283801 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-20306adbc8e7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:2f:5e:f5:4d:ee} reservation:<nil>}
	I0919 23:22:45.926646  283801 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e3bc7e48275b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:f1:66:e9:e5:54} reservation:<nil>}
	I0919 23:22:45.927471  283801 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d23ca0}
	I0919 23:22:45.927494  283801 network_create.go:124] attempt to create docker network default-k8s-diff-port-523696 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:22:45.927559  283801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 default-k8s-diff-port-523696
	I0919 23:22:45.999193  283801 network_create.go:108] docker network default-k8s-diff-port-523696 192.168.76.0/24 created
	I0919 23:22:45.999231  283801 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-523696" container
	I0919 23:22:45.999318  283801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:22:46.020910  283801 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-523696 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:22:46.043037  283801 oci.go:103] Successfully created a docker volume default-k8s-diff-port-523696
	I0919 23:22:46.043169  283801 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-523696-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --entrypoint /usr/bin/test -v default-k8s-diff-port-523696:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:22:46.516794  283801 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-523696
	I0919 23:22:46.516832  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:46.516865  283801 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:22:46.516943  283801 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523696:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	W0919 23:22:46.541945  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	W0919 23:22:49.044069  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:50.668432  278994 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:22:50.668528  278994 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:22:50.668662  278994 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:22:50.668733  278994 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:22:50.668780  278994 kubeadm.go:310] OS: Linux
	I0919 23:22:50.668861  278994 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:22:50.668933  278994 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:22:50.669004  278994 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:22:50.669077  278994 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:22:50.669161  278994 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:22:50.669211  278994 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:22:50.669254  278994 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:22:50.669292  278994 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:22:50.669380  278994 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:22:50.669519  278994 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:22:50.669630  278994 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:22:50.669709  278994 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:22:50.806400  278994 out.go:252]   - Generating certificates and keys ...
	I0919 23:22:50.806510  278994 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:22:50.806621  278994 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:22:50.806711  278994 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:22:50.806857  278994 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:22:50.806971  278994 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:22:50.807032  278994 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:22:50.807088  278994 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:22:50.807233  278994 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756077 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:22:50.807297  278994 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:22:50.807423  278994 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756077 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:22:50.807489  278994 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:22:50.807565  278994 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:22:50.807612  278994 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:22:50.807677  278994 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:22:50.807748  278994 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:22:50.807817  278994 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:22:50.807898  278994 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:22:50.807980  278994 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:22:50.808035  278994 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:22:50.808186  278994 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:22:50.808314  278994 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:22:50.944612  278994 out.go:252]   - Booting up control plane ...
	I0919 23:22:50.944749  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:22:50.944847  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:22:50.944951  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:22:50.945123  278994 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:22:50.945287  278994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:22:50.945456  278994 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:22:50.945588  278994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:22:50.945660  278994 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:22:50.945868  278994 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:22:50.946023  278994 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:22:50.946097  278994 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.079086ms
	I0919 23:22:50.946255  278994 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:22:50.946400  278994 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0919 23:22:50.946538  278994 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:22:50.946653  278994 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:22:50.946765  278994 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.40569387s
	I0919 23:22:50.946915  278994 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.944914128s
	I0919 23:22:50.947015  278994 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503168258s
	I0919 23:22:50.947183  278994 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:22:50.947347  278994 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:22:50.947433  278994 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:22:50.947708  278994 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-756077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:22:50.947779  278994 kubeadm.go:310] [bootstrap-token] Using token: 05dt5h.no6yext1q5butvcd
	I0919 23:22:50.950871  278994 out.go:252]   - Configuring RBAC rules ...
	I0919 23:22:50.951051  278994 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:22:50.951203  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:22:50.951406  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:22:50.951568  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:22:50.951689  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:22:50.951804  278994 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:22:50.952051  278994 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:22:50.952129  278994 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:22:50.952196  278994 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:22:50.952205  278994 kubeadm.go:310] 
	I0919 23:22:50.952262  278994 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:22:50.952275  278994 kubeadm.go:310] 
	I0919 23:22:50.952369  278994 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:22:50.952426  278994 kubeadm.go:310] 
	I0919 23:22:50.952508  278994 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:22:50.952638  278994 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:22:50.952734  278994 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:22:50.952745  278994 kubeadm.go:310] 
	I0919 23:22:50.952809  278994 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:22:50.952815  278994 kubeadm.go:310] 
	I0919 23:22:50.952875  278994 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:22:50.952887  278994 kubeadm.go:310] 
	I0919 23:22:50.952968  278994 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:22:50.953202  278994 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:22:50.953340  278994 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:22:50.953365  278994 kubeadm.go:310] 
	I0919 23:22:50.953570  278994 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:22:50.953689  278994 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:22:50.953696  278994 kubeadm.go:310] 
	I0919 23:22:50.953805  278994 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 05dt5h.no6yext1q5butvcd \
	I0919 23:22:50.953932  278994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 23:22:50.953959  278994 kubeadm.go:310] 	--control-plane 
	I0919 23:22:50.953963  278994 kubeadm.go:310] 
	I0919 23:22:50.954120  278994 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:22:50.954128  278994 kubeadm.go:310] 
	I0919 23:22:50.954237  278994 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 05dt5h.no6yext1q5butvcd \
	I0919 23:22:50.954381  278994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
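
The --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 of the cluster CA certificate's Subject Public Key Info. A small Go sketch that recomputes it from the conventional kubeadm CA path (the path is an assumption, not taken from this run):

	package main

	import (
		"crypto/sha256"
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Conventional kubeadm location; adjust for a minikube node if needed.
		data, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA cert.
		fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
	}
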
	I0919 23:22:50.954392  278994 cni.go:84] Creating CNI manager for ""
	I0919 23:22:50.954402  278994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:50.958294  278994 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 23:22:50.961078  278994 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:22:50.966992  278994 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:22:50.967027  278994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:22:50.995512  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:22:51.269484  278994 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:22:51.269602  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:51.269685  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756077 minikube.k8s.io/updated_at=2025_09_19T23_22_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=embed-certs-756077 minikube.k8s.io/primary=true
	I0919 23:22:51.356378  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:51.368546  278994 ops.go:34] apiserver oom_adj: -16
	I0919 23:22:51.856941  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:52.357350  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:52.857348  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
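
The minikube-rbac binding created above grants cluster-admin to the kube-system default service account. The equivalent API call through client-go, as a sketch (the kubeconfig path is a placeholder):

	package main

	import (
		"context"

		rbacv1 "k8s.io/api/rbac/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Same shape as: kubectl create clusterrolebinding minikube-rbac
		//   --clusterrole=cluster-admin --serviceaccount=kube-system:default
		crb := &rbacv1.ClusterRoleBinding{
			ObjectMeta: metav1.ObjectMeta{Name: "minikube-rbac"},
			Subjects: []rbacv1.Subject{{
				Kind:      "ServiceAccount",
				Name:      "default",
				Namespace: "kube-system",
			}},
			RoleRef: rbacv1.RoleRef{
				APIGroup: "rbac.authorization.k8s.io",
				Kind:     "ClusterRole",
				Name:     "cluster-admin",
			},
		}
		if _, err := cs.RbacV1().ClusterRoleBindings().Create(context.TODO(), crb, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
	}
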
	W0919 23:22:51.044396  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:52.541643  272997 pod_ready.go:94] pod "coredns-66bc5c9577-5jl4c" is "Ready"
	I0919 23:22:52.541668  272997 pod_ready.go:86] duration metric: took 40.505567802s for pod "coredns-66bc5c9577-5jl4c" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.544465  272997 pod_ready.go:83] waiting for pod "etcd-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.548683  272997 pod_ready.go:94] pod "etcd-no-preload-042753" is "Ready"
	I0919 23:22:52.548708  272997 pod_ready.go:86] duration metric: took 4.220664ms for pod "etcd-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.551162  272997 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.556198  272997 pod_ready.go:94] pod "kube-apiserver-no-preload-042753" is "Ready"
	I0919 23:22:52.556228  272997 pod_ready.go:86] duration metric: took 5.043479ms for pod "kube-apiserver-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.558721  272997 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.739379  272997 pod_ready.go:94] pod "kube-controller-manager-no-preload-042753" is "Ready"
	I0919 23:22:52.739421  272997 pod_ready.go:86] duration metric: took 180.670358ms for pod "kube-controller-manager-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.940072  272997 pod_ready.go:83] waiting for pod "kube-proxy-bgkfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.339684  272997 pod_ready.go:94] pod "kube-proxy-bgkfm" is "Ready"
	I0919 23:22:53.339716  272997 pod_ready.go:86] duration metric: took 399.589997ms for pod "kube-proxy-bgkfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.539747  272997 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.939792  272997 pod_ready.go:94] pod "kube-scheduler-no-preload-042753" is "Ready"
	I0919 23:22:53.939821  272997 pod_ready.go:86] duration metric: took 400.050647ms for pod "kube-scheduler-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.939836  272997 pod_ready.go:40] duration metric: took 41.90921689s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:22:53.995997  272997 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:22:53.997748  272997 out.go:179] * Done! kubectl is now configured to use "no-preload-042753" cluster and "default" namespace by default
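
The pod_ready waits above amount to listing kube-system pods by label and checking the PodReady condition until it is True. A minimal client-go sketch of the same check (label selector and kubeconfig path are placeholders):

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod's PodReady condition is True.
	func podReady(p *corev1.Pod) bool {
		for _, c := range p.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Poll until every pod matching the selector reports Ready.
		for {
			pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				panic(err)
			}
			ready := len(pods.Items) > 0
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					ready = false
				}
			}
			if ready {
				fmt.Println("all matching pods are Ready")
				return
			}
			time.Sleep(2 * time.Second)
		}
	}
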
	I0919 23:22:50.970795  283801 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523696:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.453777091s)
	I0919 23:22:50.970834  283801 kic.go:203] duration metric: took 4.453965068s to extract preloaded images to volume ...
	W0919 23:22:50.970939  283801 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:22:50.970979  283801 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:22:50.971020  283801 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:22:51.047347  283801 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-523696 --name default-k8s-diff-port-523696 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --network default-k8s-diff-port-523696 --ip 192.168.76.2 --volume default-k8s-diff-port-523696:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:22:51.398204  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Running}}
	I0919 23:22:51.423380  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.446056  283801 cli_runner.go:164] Run: docker exec default-k8s-diff-port-523696 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:22:51.501886  283801 oci.go:144] the created container "default-k8s-diff-port-523696" has a running status.
	I0919 23:22:51.501916  283801 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa...
	I0919 23:22:51.550645  283801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:22:51.583303  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.605265  283801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:22:51.605289  283801 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-523696 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:22:51.670333  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.698050  283801 machine.go:93] provisionDockerMachine start ...
	I0919 23:22:51.698163  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:51.725188  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:51.725624  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:51.725648  283801 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:22:51.871213  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:22:51.871246  283801 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523696"
	I0919 23:22:51.871338  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:51.896996  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:51.897302  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:51.897331  283801 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523696 && echo "default-k8s-diff-port-523696" | sudo tee /etc/hostname
	I0919 23:22:52.053414  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:22:52.053491  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.074331  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:52.074601  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:52.074631  283801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523696/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:22:52.213027  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:22:52.213062  283801 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 23:22:52.213151  283801 ubuntu.go:190] setting up certificates
	I0919 23:22:52.213166  283801 provision.go:84] configureAuth start
	I0919 23:22:52.213261  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:52.231215  283801 provision.go:143] copyHostCerts
	I0919 23:22:52.231283  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 23:22:52.231296  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 23:22:52.231390  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 23:22:52.231551  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 23:22:52.231566  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 23:22:52.231606  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 23:22:52.231687  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 23:22:52.231697  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 23:22:52.231736  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 23:22:52.231824  283801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523696 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-523696 localhost minikube]
	I0919 23:22:52.685747  283801 provision.go:177] copyRemoteCerts
	I0919 23:22:52.685816  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:22:52.685861  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.704970  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:52.803041  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:22:52.831600  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 23:22:52.858378  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:22:52.887800  283801 provision.go:87] duration metric: took 674.617806ms to configureAuth
	I0919 23:22:52.887832  283801 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:22:52.888001  283801 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:52.888096  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.908462  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:52.908689  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:52.908711  283801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:22:53.161476  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:22:53.161503  283801 machine.go:96] duration metric: took 1.463430077s to provisionDockerMachine
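
provisionDockerMachine drives everything through SSH to the forwarded port on 127.0.0.1 (33084 in this run) as user "docker". A bare-bones sketch of executing one provisioning command that way with golang.org/x/crypto/ssh (the key path and command are placeholders, not minikube's code):

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile("/path/to/machines/<profile>/id_rsa") // hypothetical path
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VMs only
		}
		client, err := ssh.Dial("tcp", "127.0.0.1:33084", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		// Run a single provisioning command and show its combined output.
		out, err := sess.CombinedOutput("hostname")
		fmt.Printf("err=%v output=%s", err, out)
	}
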
	I0919 23:22:53.161520  283801 client.go:171] duration metric: took 7.301466529s to LocalClient.Create
	I0919 23:22:53.161541  283801 start.go:167] duration metric: took 7.301538688s to libmachine.API.Create "default-k8s-diff-port-523696"
	I0919 23:22:53.161549  283801 start.go:293] postStartSetup for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:22:53.161566  283801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:22:53.161627  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:22:53.161662  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.181146  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.282060  283801 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:22:53.285696  283801 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:22:53.285738  283801 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:22:53.285748  283801 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:22:53.285755  283801 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:22:53.285766  283801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 23:22:53.285833  283801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 23:22:53.285930  283801 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 23:22:53.286090  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:22:53.296191  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:22:53.325713  283801 start.go:296] duration metric: took 164.145408ms for postStartSetup
	I0919 23:22:53.326128  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:53.344693  283801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:22:53.344984  283801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:22:53.345028  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.363816  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.458166  283801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:22:53.462892  283801 start.go:128] duration metric: took 7.605384435s to createHost
	I0919 23:22:53.462920  283801 start.go:83] releasing machines lock for "default-k8s-diff-port-523696", held for 7.605536132s
	I0919 23:22:53.463009  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:53.481448  283801 ssh_runner.go:195] Run: cat /version.json
	I0919 23:22:53.481507  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.481532  283801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:22:53.481610  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.502356  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.502788  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.596611  283801 ssh_runner.go:195] Run: systemctl --version
	I0919 23:22:53.674184  283801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:22:53.818922  283801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:22:53.823893  283801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:22:53.848892  283801 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:22:53.848977  283801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:22:53.883792  283801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:22:53.883820  283801 start.go:495] detecting cgroup driver to use...
	I0919 23:22:53.883868  283801 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:22:53.883915  283801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:22:53.901387  283801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:22:53.914502  283801 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:22:53.914565  283801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:22:53.932453  283801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:22:53.952211  283801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:22:54.032454  283801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:22:54.120602  283801 docker.go:234] disabling docker service ...
	I0919 23:22:54.120684  283801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:22:54.139858  283801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:22:54.154159  283801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:22:54.229745  283801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:22:54.420914  283801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:22:54.434730  283801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:22:54.453483  283801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:22:54.453558  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.467233  283801 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 23:22:54.467296  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.478437  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.489333  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.500662  283801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:22:54.511714  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.522985  283801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.540467  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.552149  283801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:22:54.561457  283801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:22:54.571524  283801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:54.639993  283801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:22:54.749200  283801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:22:54.749294  283801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:22:54.754519  283801 start.go:563] Will wait 60s for crictl version
	I0919 23:22:54.754593  283801 ssh_runner.go:195] Run: which crictl
	I0919 23:22:54.758837  283801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:22:54.794349  283801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 23:22:54.794444  283801 ssh_runner.go:195] Run: crio --version
	I0919 23:22:54.832527  283801 ssh_runner.go:195] Run: crio --version
	I0919 23:22:54.875403  283801 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
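
The CRI-O adjustments above are line-level edits of /etc/crio/crio.conf.d/02-crio.conf applied through sed over SSH; the pause-image override, for example, reduces to a single-line replace. A Go sketch of the same edit done locally (the file path and image come from the log, the rest is illustrative):

	package main

	import (
		"os"
		"regexp"
	)

	func main() {
		path := "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		// Equivalent of:
		//   sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|'
		re := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		out := re.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.10.1"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			panic(err)
		}
	}
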
	I0919 23:22:53.356801  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:53.856908  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.357010  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.857181  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.936054  278994 kubeadm.go:1105] duration metric: took 3.666502786s to wait for elevateKubeSystemPrivileges
	I0919 23:22:54.936088  278994 kubeadm.go:394] duration metric: took 15.878020809s to StartCluster
	I0919 23:22:54.936125  278994 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:54.936215  278994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:22:54.937766  278994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:54.938034  278994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:22:54.938060  278994 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:22:54.938168  278994 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:22:54.938272  278994 config.go:182] Loaded profile config "embed-certs-756077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:54.938264  278994 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-756077"
	I0919 23:22:54.938301  278994 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-756077"
	I0919 23:22:54.938756  278994 host.go:66] Checking if "embed-certs-756077" exists ...
	I0919 23:22:54.938783  278994 addons.go:69] Setting default-storageclass=true in profile "embed-certs-756077"
	I0919 23:22:54.938826  278994 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756077"
	I0919 23:22:54.939448  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.939705  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.941840  278994 out.go:179] * Verifying Kubernetes components...
	I0919 23:22:54.944232  278994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:54.969686  278994 addons.go:238] Setting addon default-storageclass=true in "embed-certs-756077"
	I0919 23:22:54.969732  278994 host.go:66] Checking if "embed-certs-756077" exists ...
	I0919 23:22:54.970380  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.970557  278994 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:22:54.976983  278994 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:22:54.977009  278994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:22:54.977123  278994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756077
	I0919 23:22:55.001926  278994 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:22:55.001953  278994 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:22:55.002125  278994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756077
	I0919 23:22:55.005386  278994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/embed-certs-756077/id_rsa Username:docker}
	I0919 23:22:55.027432  278994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/embed-certs-756077/id_rsa Username:docker}
	I0919 23:22:55.045999  278994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:22:55.077518  278994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:22:55.141609  278994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:22:55.152579  278994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:22:55.261828  278994 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:22:55.263261  278994 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756077" to be "Ready" ...
	I0919 23:22:55.518814  278994 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
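
Addon enablement is just kubectl apply of the staged manifests against the node's kubeconfig, as the two commands above show. A sketch of the same invocation from Go (binary name and paths are placeholders):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("kubectl", "apply", "-f", "/etc/kubernetes/addons/storage-provisioner.yaml")
		// Point kubectl at the cluster's kubeconfig, like the logged KUBECONFIG=... invocation.
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("apply failed: %v\n%s\n", err, out)
			return
		}
		fmt.Printf("%s", out)
	}
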
	I0919 23:22:54.876787  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:22:54.897253  283801 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:22:54.901429  283801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:22:54.914909  283801 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:22:54.915053  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:54.915138  283801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:22:55.027207  283801 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:22:55.027231  283801 crio.go:433] Images already preloaded, skipping extraction
	I0919 23:22:55.027287  283801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:22:55.082804  283801 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:22:55.082839  283801 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:22:55.082849  283801 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0919 23:22:55.083037  283801 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:22:55.083170  283801 ssh_runner.go:195] Run: crio config
	I0919 23:22:55.149830  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:22:55.149858  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:55.149871  283801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:22:55.149897  283801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523696 NodeName:default-k8s-diff-port-523696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:22:55.150064  283801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
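
The kubeadm config above is generated in memory and later copied to /var/tmp/minikube/kubeadm.yaml.new on the node. How such a fragment can be produced from a few parameters, as a text/template sketch (field names, values and the template itself are illustrative, not minikube's template):

	package main

	import (
		"os"
		"text/template"
	)

	const tmpl = "apiVersion: kubeadm.k8s.io/v1beta4\n" +
		"kind: ClusterConfiguration\n" +
		"controlPlaneEndpoint: {{.Endpoint}}\n" +
		"kubernetesVersion: {{.Version}}\n" +
		"networking:\n" +
		"  dnsDomain: cluster.local\n" +
		"  podSubnet: \"{{.PodSubnet}}\"\n" +
		"  serviceSubnet: {{.ServiceSubnet}}\n"

	func main() {
		t := template.Must(template.New("cfg").Parse(tmpl))
		// Values taken from the rendered config shown above.
		data := map[string]string{
			"Endpoint":      "control-plane.minikube.internal:8444",
			"Version":       "v1.34.0",
			"PodSubnet":     "10.244.0.0/16",
			"ServiceSubnet": "10.96.0.0/12",
		}
		if err := t.Execute(os.Stdout, data); err != nil {
			panic(err)
		}
	}
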
	
	I0919 23:22:55.150142  283801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:22:55.161859  283801 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:22:55.161914  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:22:55.174514  283801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0919 23:22:55.200808  283801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:22:55.229639  283801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0919 23:22:55.253552  283801 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:22:55.258542  283801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:22:55.275361  283801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:55.365446  283801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:22:55.388230  283801 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696 for IP: 192.168.76.2
	I0919 23:22:55.388257  283801 certs.go:194] generating shared ca certs ...
	I0919 23:22:55.388277  283801 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.388450  283801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 23:22:55.388501  283801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 23:22:55.388514  283801 certs.go:256] generating profile certs ...
	I0919 23:22:55.388582  283801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key
	I0919 23:22:55.388598  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt with IP's: []
	I0919 23:22:55.491075  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt ...
	I0919 23:22:55.491124  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt: {Name:mk84fb74a6745b447df98b265e5b3c1639ecbc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.491406  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key ...
	I0919 23:22:55.491456  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key: {Name:mk1118e088979888764eed348877b43632df1aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.491604  283801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e
	I0919 23:22:55.491628  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
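
Generating the profile apiserver cert "with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]" means issuing a serving certificate whose IP SANs cover the service VIP, loopback and node IP, signed by the profile's CA. A self-contained crypto/x509 sketch of that shape (the throwaway CA below stands in for the existing minikubeCA that the real flow reuses; error handling elided for brevity):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// Throwaway CA key and self-signed CA certificate.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Serving certificate carrying the IP SANs from the log line above.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
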
	I0919 23:22:52.297243  257816 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074328978s)
	W0919 23:22:52.297295  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0919 23:22:52.297307  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:52.297321  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:52.345170  257816 logs.go:123] Gathering logs for kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba] ...
	I0919 23:22:52.345222  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:52.394986  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:52.395033  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:52.444079  257816 logs.go:123] Gathering logs for kube-controller-manager [a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0] ...
	I0919 23:22:52.444143  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0"
	I0919 23:22:52.483830  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:52.483856  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:52.525769  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:52.525806  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:52.545275  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:52.545300  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:52.618883  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:52.618920  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:55.160688  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:22:55.161219  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:22:55.161297  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:55.161376  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:55.212699  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:55.212733  257816 cri.go:89] found id: "2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:55.212739  257816 cri.go:89] found id: ""
	I0919 23:22:55.212749  257816 logs.go:282] 2 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]
	I0919 23:22:55.212807  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.218135  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.223313  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:55.223389  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:55.274293  257816 cri.go:89] found id: ""
	I0919 23:22:55.274322  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.274331  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:55.274339  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:55.274407  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:55.321135  257816 cri.go:89] found id: ""
	I0919 23:22:55.321165  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.321176  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:55.321184  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:55.321248  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:55.363240  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:55.363266  257816 cri.go:89] found id: ""
	I0919 23:22:55.363276  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:55.363344  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.368142  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:55.368209  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:55.422049  257816 cri.go:89] found id: ""
	I0919 23:22:55.422079  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.422089  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:55.422097  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:55.422184  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:55.473271  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:55.473393  257816 cri.go:89] found id: ""
	I0919 23:22:55.473411  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:22:55.473494  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.479176  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:55.479299  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:55.530375  257816 cri.go:89] found id: ""
	I0919 23:22:55.530402  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.530412  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:55.530419  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:55.530477  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:55.574195  257816 cri.go:89] found id: ""
	I0919 23:22:55.574228  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.574239  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:55.574257  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:55.574279  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:22:55.652936  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:22:55.652960  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:55.652977  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:55.700673  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:55.700705  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:55.776574  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:55.776616  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:55.825085  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:55.825137  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:55.873634  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:55.873672  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:55.916948  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:55.916984  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:22:55.520432  278994 addons.go:514] duration metric: took 582.260798ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:22:55.766300  278994 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756077" context rescaled to 1 replicas
	W0919 23:22:57.267427  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	I0919 23:22:55.978492  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e ...
	I0919 23:22:55.978523  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e: {Name:mk984ddd05acd5e1e36fb52bba3da8de3378e2a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.978711  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e ...
	I0919 23:22:55.978727  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e: {Name:mka6692298239dcc8c1eff437a73ac5078ad7789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.978821  283801 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt
	I0919 23:22:55.978936  283801 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key
	I0919 23:22:55.979028  283801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key
	I0919 23:22:55.979061  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt with IP's: []
	I0919 23:22:56.104762  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt ...
	I0919 23:22:56.104802  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt: {Name:mkaaaddcc9511ddaf8101dd2778396387c9f0120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:56.105020  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key ...
	I0919 23:22:56.105042  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key: {Name:mk6e247b15616b2ba853c2b32e9875074a5777ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:56.105316  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 23:22:56.105377  283801 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 23:22:56.105388  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:22:56.105420  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:22:56.105446  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:22:56.105473  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 23:22:56.105527  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:22:56.106333  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:22:56.136307  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:22:56.166269  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:22:56.197270  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 23:22:56.232817  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:22:56.267968  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:22:56.302225  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:22:56.329369  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:22:56.357268  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 23:22:56.391291  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:22:56.419600  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 23:22:56.448910  283801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:22:56.472201  283801 ssh_runner.go:195] Run: openssl version
	I0919 23:22:56.478518  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 23:22:56.490827  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.495619  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.495679  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.503861  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:22:56.515223  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:22:56.526847  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.531858  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.531919  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.539257  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:22:56.549509  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 23:22:56.560429  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.564500  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.564562  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.572479  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 23:22:56.584023  283801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:22:56.587805  283801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:22:56.587860  283801 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:22:56.587940  283801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:22:56.588001  283801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:22:56.625836  283801 cri.go:89] found id: ""
	I0919 23:22:56.625908  283801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:22:56.636429  283801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:22:56.646739  283801 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:22:56.646801  283801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:22:56.657786  283801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:22:56.657808  283801 kubeadm.go:157] found existing configuration files:
	
	I0919 23:22:56.657854  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0919 23:22:56.667999  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:22:56.668060  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:22:56.677503  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0919 23:22:56.687116  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:22:56.687196  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:22:56.695842  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0919 23:22:56.705483  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:22:56.705544  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:22:56.714929  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0919 23:22:56.724270  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:22:56.724354  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:22:56.734083  283801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:22:56.792357  283801 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:22:56.853438  283801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:22:56.020455  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:56.020490  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:56.039034  257816 logs.go:123] Gathering logs for kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba] ...
	I0919 23:22:56.039076  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	W0919 23:22:56.077680  257816 logs.go:130] failed kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba": Process exited with status 1
	stdout:
	
	stderr:
	E0919 23:22:56.074657    4282 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist" containerID="2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	time="2025-09-19T23:22:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist"
	 output: 
	** stderr ** 
	E0919 23:22:56.074657    4282 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist" containerID="2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	time="2025-09-19T23:22:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist"
	
	** /stderr **
	I0919 23:22:58.579314  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:22:58.579784  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:22:58.579841  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:58.579899  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:58.616892  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:58.616924  257816 cri.go:89] found id: ""
	I0919 23:22:58.616934  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:22:58.617003  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.621687  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:58.621752  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:58.659451  257816 cri.go:89] found id: ""
	I0919 23:22:58.659475  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.659483  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:58.659488  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:58.659547  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:58.697871  257816 cri.go:89] found id: ""
	I0919 23:22:58.697895  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.697904  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:58.697912  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:58.697967  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:58.737444  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:58.737470  257816 cri.go:89] found id: ""
	I0919 23:22:58.737479  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:58.737531  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.741438  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:58.741504  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:58.781147  257816 cri.go:89] found id: ""
	I0919 23:22:58.781173  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.781183  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:58.781189  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:58.781244  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:58.822000  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:58.822029  257816 cri.go:89] found id: ""
	I0919 23:22:58.822038  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:22:58.822132  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.826176  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:58.826240  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:58.870692  257816 cri.go:89] found id: ""
	I0919 23:22:58.870721  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.870732  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:58.870740  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:58.870803  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:58.907037  257816 cri.go:89] found id: ""
	I0919 23:22:58.907060  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.907068  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:58.907075  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:58.907093  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:58.924701  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:58.924728  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:22:58.996196  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:22:58.996223  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:58.996237  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:59.041860  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:59.041887  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:59.106571  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:59.106602  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:59.145274  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:59.145297  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:59.195054  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:59.195089  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:59.241289  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:59.241316  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0919 23:22:59.267677  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	W0919 23:23:01.766814  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	I0919 23:23:01.833127  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:01.833556  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:23:01.833609  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:23:01.833662  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:23:01.872807  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:01.872827  257816 cri.go:89] found id: ""
	I0919 23:23:01.872834  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:23:01.872886  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:01.876742  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:23:01.876809  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:23:01.921442  257816 cri.go:89] found id: ""
	I0919 23:23:01.921475  257816 logs.go:282] 0 containers: []
	W0919 23:23:01.921485  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:23:01.921493  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:23:01.921553  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:23:01.961417  257816 cri.go:89] found id: ""
	I0919 23:23:01.961446  257816 logs.go:282] 0 containers: []
	W0919 23:23:01.961457  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:23:01.961463  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:23:01.961520  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:23:01.999638  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:01.999660  257816 cri.go:89] found id: ""
	I0919 23:23:01.999669  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:23:01.999729  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:02.004806  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:23:02.004889  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:23:02.044768  257816 cri.go:89] found id: ""
	I0919 23:23:02.044792  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.044800  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:23:02.044806  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:23:02.044852  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:23:02.087557  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:02.087583  257816 cri.go:89] found id: ""
	I0919 23:23:02.087592  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:23:02.087641  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:02.091577  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:23:02.091648  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:23:02.130551  257816 cri.go:89] found id: ""
	I0919 23:23:02.130578  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.130588  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:23:02.130595  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:23:02.130655  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:23:02.166193  257816 cri.go:89] found id: ""
	I0919 23:23:02.166218  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.166226  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:23:02.166237  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:23:02.166253  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:02.209501  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:23:02.209542  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:02.285555  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:23:02.285593  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:02.322701  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:23:02.322732  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:23:02.379074  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:23:02.379119  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:23:02.427750  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:23:02.427777  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:23:02.533867  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:23:02.533924  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:23:02.558587  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:23:02.558629  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:23:02.627751  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:23:05.129261  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:05.129722  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:23:05.129780  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:23:05.129835  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:23:05.165639  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:05.165657  257816 cri.go:89] found id: ""
	I0919 23:23:05.165665  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:23:05.165723  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.170166  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:23:05.170258  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:23:05.210247  257816 cri.go:89] found id: ""
	I0919 23:23:05.210279  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.210292  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:23:05.210302  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:23:05.210365  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:23:05.252292  257816 cri.go:89] found id: ""
	I0919 23:23:05.252314  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.252343  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:23:05.252351  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:23:05.252413  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:23:05.290166  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:05.290194  257816 cri.go:89] found id: ""
	I0919 23:23:05.290203  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:23:05.290255  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.294253  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:23:05.294323  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:23:05.335596  257816 cri.go:89] found id: ""
	I0919 23:23:05.335624  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.335634  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:23:05.335642  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:23:05.335704  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:23:05.380797  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:05.380823  257816 cri.go:89] found id: ""
	I0919 23:23:05.380833  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:23:05.380909  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.385144  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:23:05.385214  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:23:05.429710  257816 cri.go:89] found id: ""
	I0919 23:23:05.429744  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.429755  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:23:05.429765  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:23:05.429833  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:23:05.475196  257816 cri.go:89] found id: ""
	I0919 23:23:05.475229  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.475240  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:23:05.475251  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:23:05.475266  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:05.544470  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:23:05.544505  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:05.584696  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:23:05.584724  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:23:05.637466  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:23:05.637513  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:23:05.684509  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:23:05.684548  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:23:05.799540  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:23:05.799575  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:23:05.822871  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:23:05.822916  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:23:05.906683  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:23:05.906743  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:23:05.906764  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:06.580638  283801 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:23:06.580718  283801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:23:06.580848  283801 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:23:06.580931  283801 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:23:06.580992  283801 kubeadm.go:310] OS: Linux
	I0919 23:23:06.581038  283801 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:23:06.581145  283801 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:23:06.581200  283801 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:23:06.581264  283801 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:23:06.581441  283801 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:23:06.581501  283801 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:23:06.581558  283801 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:23:06.581614  283801 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:23:06.581727  283801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:23:06.581836  283801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:23:06.581933  283801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:23:06.582016  283801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:23:06.584192  283801 out.go:252]   - Generating certificates and keys ...
	I0919 23:23:06.584282  283801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:23:06.584375  283801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:23:06.584487  283801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:23:06.584579  283801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:23:06.584680  283801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:23:06.584781  283801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:23:06.584862  283801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:23:06.585058  283801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-523696 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0919 23:23:06.585600  283801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:23:06.585716  283801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-523696 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0919 23:23:06.585782  283801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:23:06.585856  283801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:23:06.585896  283801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:23:06.585961  283801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:23:06.586021  283801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:23:06.586170  283801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:23:06.586294  283801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:23:06.586410  283801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:23:06.586487  283801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:23:06.586587  283801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:23:06.586670  283801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:23:06.588719  283801 out.go:252]   - Booting up control plane ...
	I0919 23:23:06.588832  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:23:06.588940  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:23:06.589035  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:23:06.589236  283801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:23:06.589377  283801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:23:06.589616  283801 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:23:06.589819  283801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:23:06.589885  283801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:23:06.590082  283801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:23:06.590282  283801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:23:06.590367  283801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.077104ms
	I0919 23:23:06.590491  283801 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:23:06.590629  283801 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I0919 23:23:06.590748  283801 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:23:06.590875  283801 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:23:06.590977  283801 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.907833474s
	I0919 23:23:06.591076  283801 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.656054382s
	I0919 23:23:06.591186  283801 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.50128624s
	I0919 23:23:06.591364  283801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:23:06.591507  283801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:23:06.591556  283801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:23:06.591745  283801 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-523696 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:23:06.591814  283801 kubeadm.go:310] [bootstrap-token] Using token: gs716u.ekkbhj331z411y8t
	I0919 23:23:06.593749  283801 out.go:252]   - Configuring RBAC rules ...
	I0919 23:23:06.593906  283801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:23:06.594079  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:23:06.594267  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:23:06.594441  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:23:06.594597  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:23:06.594744  283801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:23:06.594924  283801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:23:06.594977  283801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:23:06.595042  283801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:23:06.595052  283801 kubeadm.go:310] 
	I0919 23:23:06.595188  283801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:23:06.595204  283801 kubeadm.go:310] 
	I0919 23:23:06.595315  283801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:23:06.595332  283801 kubeadm.go:310] 
	I0919 23:23:06.595376  283801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:23:06.595445  283801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:23:06.595521  283801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:23:06.595532  283801 kubeadm.go:310] 
	I0919 23:23:06.595616  283801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:23:06.595625  283801 kubeadm.go:310] 
	I0919 23:23:06.595694  283801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:23:06.595702  283801 kubeadm.go:310] 
	I0919 23:23:06.595775  283801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:23:06.595887  283801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:23:06.595995  283801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:23:06.596007  283801 kubeadm.go:310] 
	I0919 23:23:06.596138  283801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:23:06.596252  283801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:23:06.596261  283801 kubeadm.go:310] 
	I0919 23:23:06.596399  283801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gs716u.ekkbhj331z411y8t \
	I0919 23:23:06.596569  283801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 23:23:06.596605  283801 kubeadm.go:310] 	--control-plane 
	I0919 23:23:06.596614  283801 kubeadm.go:310] 
	I0919 23:23:06.596727  283801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:23:06.596737  283801 kubeadm.go:310] 
	I0919 23:23:06.596805  283801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gs716u.ekkbhj331z411y8t \
	I0919 23:23:06.596908  283801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 23:23:06.596918  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:23:06.596924  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:23:06.598689  283801 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0919 23:23:03.767381  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	W0919 23:23:06.267465  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
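	For reference, the join command printed by kubeadm above embeds a bootstrap token and a CA public-key hash; both can be re-derived on the control-plane node with standard tooling (a minimal sketch, assuming kubeadm's default /etc/kubernetes/pki layout):
	  # list bootstrap tokens currently valid on this control plane
	  sudo kubeadm token list
	  # recompute the value passed as --discovery-token-ca-cert-hash
	  openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
	    | openssl rsa -pubin -outform der 2>/dev/null \
	    | openssl dgst -sha256 -hex | sed 's/^.* //'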
	
	
	==> CRI-O <==
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.344528268Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=992ebe60-b3ca-418e-be33-f4b400c09084 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.345405362Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=e50fbcc0-d94e-4df3-8bc5-b638d72a3bf1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.345507829Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.424763312Z" level=info msg="Created container 6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=e50fbcc0-d94e-4df3-8bc5-b638d72a3bf1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.425427023Z" level=info msg="Starting container: 6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a" id=4bcaf4dc-62a7-449c-9bbe-04f7877d151c name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.433163661Z" level=info msg="Started container" PID=2073 containerID=6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper id=4bcaf4dc-62a7-449c-9bbe-04f7877d151c name=/runtime.v1.RuntimeService/StartContainer sandboxID=39ebb5539fb46c6fd467f8f474e91c9b6e85e2b4aecec5b146f7e1321de059a5
	Sep 19 23:22:40 no-preload-042753 crio[563]: time="2025-09-19 23:22:40.450163947Z" level=info msg="Removing container: eb5aaaaf0ca48d9e42971b289931cd4255e5aa0e2267a32bc3fd3744ee35217b" id=7a06cdfd-8862-4b58-9532-de01df9c94b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:22:40 no-preload-042753 crio[563]: time="2025-09-19 23:22:40.471649278Z" level=info msg="Removed container eb5aaaaf0ca48d9e42971b289931cd4255e5aa0e2267a32bc3fd3744ee35217b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=7a06cdfd-8862-4b58-9532-de01df9c94b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.454601572Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=94f91c7d-b6c4-4d2b-ac7a-cece75ef234b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.455071430Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=94f91c7d-b6c4-4d2b-ac7a-cece75ef234b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.455888081Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=86a0a6b2-913d-4862-b3e5-a105ec76f294 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.456094705Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=86a0a6b2-913d-4862-b3e5-a105ec76f294 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.457808766Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9894f755-f2c9-4099-aa03-3607430c4cab name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.457922656Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.472317763Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e2558af3b2a8d93c2cf722931b478d1c290646b66a872c39c73a5a68cb513781/merged/etc/passwd: no such file or directory"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.472362840Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e2558af3b2a8d93c2cf722931b478d1c290646b66a872c39c73a5a68cb513781/merged/etc/group: no such file or directory"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.535761714Z" level=info msg="Created container 33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276: kube-system/storage-provisioner/storage-provisioner" id=9894f755-f2c9-4099-aa03-3607430c4cab name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.536519743Z" level=info msg="Starting container: 33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276" id=feae251b-290c-4d8a-9897-b9451028405a name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.544033782Z" level=info msg="Started container" PID=2143 containerID=33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276 description=kube-system/storage-provisioner/storage-provisioner id=feae251b-290c-4d8a-9897-b9451028405a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f577473836a1dfa34428aa63d6db150e816d45faf2868d4909004d90a38704b7
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.342831786Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=21767fd8-6b05-4e07-8762-846e1f0a6c10 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.343046020Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=21767fd8-6b05-4e07-8762-846e1f0a6c10 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.343755750Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=61e55b3f-cd0b-4ecb-b290-b61280888709 name=/runtime.v1.ImageService/PullImage
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.409602506Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:23:01 no-preload-042753 crio[563]: time="2025-09-19 23:23:01.342565403Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8945447a-61b9-4012-bd00-4c279e314f8f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:23:01 no-preload-042753 crio[563]: time="2025-09-19 23:23:01.342919551Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8945447a-61b9-4012-bd00-4c279e314f8f name=/runtime.v1.ImageService/ImageStatus
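	The repeated "Image fake.domain/registry.k8s.io/echoserver:1.4 not found" entries above are expected to keep failing: the reference appears to belong to the non-running metrics-server-746fcd58dc-p99mj pod noted later in this report, and fake.domain does not resolve. A hedged way to reproduce the pull failure from the node (sketch, assuming crictl is available inside the minikube node image):
	  minikube -p no-preload-042753 ssh -- sudo crictl images | grep echoserver
	  minikube -p no-preload-042753 ssh -- sudo crictl pull fake.domain/registry.k8s.io/echoserver:1.4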
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	33d900fc07a78       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           28 seconds ago       Running             storage-provisioner         2                   f577473836a1d       storage-provisioner
	6efd414c52465       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago       Exited              dashboard-metrics-scraper   2                   39ebb5539fb46       dashboard-metrics-scraper-6ffb444bf9-86q9r
	8e447cc9d3b20       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   51 seconds ago       Running             kubernetes-dashboard        0                   2c36db05b3ff0       kubernetes-dashboard-855c9754f9-hdlqb
	5c3068fcc8ac4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           59 seconds ago       Running             coredns                     1                   61a70e450ec2f       coredns-66bc5c9577-5jl4c
	d7d9c35dadfbe       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           59 seconds ago       Running             busybox                     1                   2a18e609975e9       busybox
	7dcf5220a7c81       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           59 seconds ago       Running             kindnet-cni                 1                   ca0c9eb5f54b5       kindnet-fzdsg
	f4ddbac3a81f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           59 seconds ago       Exited              storage-provisioner         1                   f577473836a1d       storage-provisioner
	c57f08ebb2421       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                           59 seconds ago       Running             kube-proxy                  1                   ed2435d863a06       kube-proxy-bgkfm
	ce95677f2eeb4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                           About a minute ago   Running             kube-scheduler              1                   8eb5da44cc874       kube-scheduler-no-preload-042753
	bc12ec0b22189       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   a293320e4700c       etcd-no-preload-042753
	889cfcbec6274       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                           About a minute ago   Running             kube-apiserver              1                   300abba554857       kube-apiserver-no-preload-042753
	11a93459f40ee       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                           About a minute ago   Running             kube-controller-manager     1                   3dc34dd0ac831       kube-controller-manager-no-preload-042753
	
	
	==> coredns [5c3068fcc8ac406d25184b56548159ca9fa994e40d728e7ca23e59518921da2f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34493 - 64162 "HINFO IN 5847359376651155573.6850278063508237149. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012952257s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
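	The dial tcp 10.96.0.1:443 i/o timeouts above mean CoreDNS could not reach the kube-apiserver Service VIP, most likely while the control plane was still coming back up, so it started with an unsynced cache (see the WARNING earlier in this block). A minimal connectivity probe, assuming the default kubernetes Service VIP shown in the errors:
	  kubectl --context no-preload-042753 get svc kubernetes -o wide
	  minikube -p no-preload-042753 ssh -- curl -sk --max-time 5 https://10.96.0.1:443/version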
	
	
	==> describe nodes <==
	Name:               no-preload-042753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-042753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=no-preload-042753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_21_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:21:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-042753
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:23:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-042753
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 014336e97c2444b2adabeb7e22dc8208
	  System UUID:                4988ced6-3606-4eae-9dae-2b8a811e936b
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-5jl4c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     115s
	  kube-system                 etcd-no-preload-042753                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m1s
	  kube-system                 kindnet-fzdsg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      116s
	  kube-system                 kube-apiserver-no-preload-042753              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-controller-manager-no-preload-042753     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-bgkfm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-scheduler-no-preload-042753              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 metrics-server-746fcd58dc-p99mj               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         88s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-86q9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hdlqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 113s               kube-proxy       
	  Normal  Starting                 58s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node no-preload-042753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node no-preload-042753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node no-preload-042753 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           116s               node-controller  Node no-preload-042753 event: Registered Node no-preload-042753 in Controller
	  Normal  NodeReady                101s               kubelet          Node no-preload-042753 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node no-preload-042753 status is now: NodeHasSufficientMemory
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node no-preload-042753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node no-preload-042753 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node no-preload-042753 event: Registered Node no-preload-042753 in Controller
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  Starting                 0s                 kubelet          Starting kubelet.
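	The trailing burst of "Starting kubelet." events (three within about a second) matches the kubelet restart loop captured in the kubelet log section further below; a hedged way to inspect it directly on the node (sketch, using minikube's ssh wrapper):
	  minikube -p no-preload-042753 ssh -- sudo journalctl -u kubelet --no-pager -n 50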
	
	
	==> dmesg <==
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 23:21] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +2.000740] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.000000] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999317] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.501476] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499982] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999149] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.001177] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.997827] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.502489] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499017] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999122] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.003267] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.996866] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.503800] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	
	
	==> etcd [bc12ec0b221899bf739aaf4847c036d4b6534e98dda80f575bb37822c45a1235] <==
	{"level":"warn","ts":"2025-09-19T23:22:09.321514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.328199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.335504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.347929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.354469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.361644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:22:32.208295Z","caller":"traceutil/trace.go:172","msg":"trace[1566850666] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"128.500232ms","start":"2025-09-19T23:22:32.079771Z","end":"2025-09-19T23:22:32.208272Z","steps":["trace[1566850666] 'read index received'  (duration: 128.486635ms)","trace[1566850666] 'applied index is now lower than readState.Index'  (duration: 8.045µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.337492Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.697346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-042753\" limit:1 ","response":"range_response_count:1 size:4830"}
	{"level":"info","ts":"2025-09-19T23:22:32.337582Z","caller":"traceutil/trace.go:172","msg":"trace[1203492330] range","detail":"{range_begin:/registry/minions/no-preload-042753; range_end:; response_count:1; response_revision:653; }","duration":"257.804428ms","start":"2025-09-19T23:22:32.079761Z","end":"2025-09-19T23:22:32.337565Z","steps":["trace[1203492330] 'agreement among raft nodes before linearized reading'  (duration: 128.614764ms)","trace[1203492330] 'range keys from in-memory index tree'  (duration: 128.967573ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.338041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.182697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.85.2\" mod_revision:635 > success:<request_put:<key:\"/registry/masterleases/192.168.85.2\" value_size:65 lease:499223789708095807 >> failure:<request_range:<key:\"/registry/masterleases/192.168.85.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:32.338173Z","caller":"traceutil/trace.go:172","msg":"trace[1727908659] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"260.200974ms","start":"2025-09-19T23:22:32.077958Z","end":"2025-09-19T23:22:32.338159Z","steps":["trace[1727908659] 'process raft request'  (duration: 130.347519ms)","trace[1727908659] 'compare'  (duration: 129.087586ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.599466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.005229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:32.599550Z","caller":"traceutil/trace.go:172","msg":"trace[1977638458] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:654; }","duration":"183.099907ms","start":"2025-09-19T23:22:32.416429Z","end":"2025-09-19T23:22:32.599529Z","steps":["trace[1977638458] 'agreement among raft nodes before linearized reading'  (duration: 53.33211ms)","trace[1977638458] 'range keys from in-memory index tree'  (duration: 129.639299ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.599704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.837999ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871624 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" mod_revision:636 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" value_size:692 lease:499223789708095111 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:32.599810Z","caller":"traceutil/trace.go:172","msg":"trace[1409306081] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"223.160756ms","start":"2025-09-19T23:22:32.376626Z","end":"2025-09-19T23:22:32.599787Z","steps":["trace[1409306081] 'process raft request'  (duration: 93.172494ms)","trace[1409306081] 'compare'  (duration: 129.733541ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:50.574256Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.295782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:50.574345Z","caller":"traceutil/trace.go:172","msg":"trace[1519693429] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:676; }","duration":"157.397477ms","start":"2025-09-19T23:22:50.416931Z","end":"2025-09-19T23:22:50.574329Z","steps":["trace[1519693429] 'agreement among raft nodes before linearized reading'  (duration: 79.868703ms)","trace[1519693429] 'range keys from in-memory index tree'  (duration: 77.378437ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:22:50.574426Z","caller":"traceutil/trace.go:172","msg":"trace[1167440019] transaction","detail":"{read_only:false; response_revision:677; number_of_response:1; }","duration":"224.750681ms","start":"2025-09-19T23:22:50.349654Z","end":"2025-09-19T23:22:50.574405Z","steps":["trace[1167440019] 'process raft request'  (duration: 147.22068ms)","trace[1167440019] 'compare'  (duration: 77.369316ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:22:50.574446Z","caller":"traceutil/trace.go:172","msg":"trace[122641576] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"107.541166ms","start":"2025-09-19T23:22:50.466894Z","end":"2025-09-19T23:22:50.574435Z","steps":["trace[122641576] 'process raft request'  (duration: 107.477523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:22:50.940920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.666925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:50.941357Z","caller":"traceutil/trace.go:172","msg":"trace[546679557] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:680; }","duration":"182.119376ms","start":"2025-09-19T23:22:50.759216Z","end":"2025-09-19T23:22:50.941336Z","steps":["trace[546679557] 'range keys from in-memory index tree'  (duration: 181.617693ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:22:50.941409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.918045ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871804 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" mod_revision:665 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:50.941480Z","caller":"traceutil/trace.go:172","msg":"trace[1836538646] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"260.581607ms","start":"2025-09-19T23:22:50.680885Z","end":"2025-09-19T23:22:50.941467Z","steps":["trace[1836538646] 'process raft request'  (duration: 125.535727ms)","trace[1836538646] 'compare'  (duration: 134.507865ms)"],"step_count":2}
	2025/09/19 23:23:08 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2025/09/19 23:23:08 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
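	The "apply request took too long" warnings above usually indicate slow disk or CPU contention on the CI host rather than data loss, and the final gRPC "transport is closing" lines coincide with the cluster being paused at 23:23:08. A hedged health spot-check via the etcd static pod (sketch; the certificate paths assume minikube's default /var/lib/minikube/certs/etcd layout):
	  kubectl --context no-preload-042753 -n kube-system exec etcd-no-preload-042753 -- \
	    etcdctl --endpoints=https://127.0.0.1:2379 \
	    --cacert=/var/lib/minikube/certs/etcd/ca.crt \
	    --cert=/var/lib/minikube/certs/etcd/server.crt \
	    --key=/var/lib/minikube/certs/etcd/server.key \
	    endpoint status -w table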
	
	
	==> kernel <==
	 23:23:10 up  2:05,  0 users,  load average: 3.15, 2.83, 1.87
	Linux no-preload-042753 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7dcf5220a7c8124462854575c668dbe751600ba3788e0b299366cc89fc5c6e48] <==
	I0919 23:22:11.092127       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:22:11.092475       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0919 23:22:11.092703       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:22:11.092728       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:22:11.092753       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:22:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:22:11.295657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:22:11.295707       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:22:11.295721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:22:11.296438       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:22:11.691880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:22:11.691919       1 metrics.go:72] Registering metrics
	I0919 23:22:11.691995       1 controller.go:711] "Syncing nftables rules"
	I0919 23:22:21.296283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:21.296343       1 main.go:301] handling current node
	I0919 23:22:31.296233       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:31.296269       1 main.go:301] handling current node
	I0919 23:22:41.296307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:41.296355       1 main.go:301] handling current node
	I0919 23:22:51.296328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:51.296379       1 main.go:301] handling current node
	I0919 23:23:01.303704       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:23:01.303749       1 main.go:301] handling current node
	
	
	==> kube-apiserver [889cfcbec62741aad18885cb57ecac40e49d488bdf845746042edfccc8daa851] <==
	W0919 23:22:10.901204       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:22:10.901241       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:22:10.901257       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0919 23:22:10.901401       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:22:10.902694       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:22:11.937459       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0919 23:22:14.534639       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:22:14.733017       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0919 23:22:14.782726       1 controller.go:667] quota admission added evaluator for: endpoints
	{"level":"warn","ts":"2025-09-19T23:23:08.035984Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0016503c0/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-09-19T23:23:08.035998Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001a1af00/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:23:08.036515       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 23:23:08.036370       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.036629       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.036429       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 105.123µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:23:08.037767       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.037810       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.037819       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.038921       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.039049       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.68348ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:23:08.039064       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.941637ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-042753" result=null
	I0919 23:23:09.541242       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	
	
	==> kube-controller-manager [11a93459f40ee7ac60c8b016660d54641b26f897a1f1e9f042ddb9290811062f] <==
	I0919 23:22:14.181083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:22:14.181397       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:22:14.183038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:22:14.185282       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:22:14.207770       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:22:14.210092       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 23:22:14.214367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 23:22:14.215766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:22:14.228537       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:22:14.229741       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:22:14.229768       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:22:14.229935       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:22:14.230042       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 23:22:14.230097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 23:22:14.230202       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:22:14.230293       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:22:14.230573       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:22:14.237942       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:22:14.247198       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:22:14.251358       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:22:14.253592       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:22:14.253731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:22:14.740735       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	E0919 23:22:44.243639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:22:44.261965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c57f08ebb2421ca234dd183d49e85973ee1e5dae991c13b067b0b303f8250382] <==
	I0919 23:22:10.915352       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:22:10.967404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:22:11.068251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:22:11.068281       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0919 23:22:11.068371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:22:11.091924       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:22:11.092012       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:22:11.098709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:22:11.099556       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:22:11.099582       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:22:11.101367       1 config.go:200] "Starting service config controller"
	I0919 23:22:11.101684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:22:11.101769       1 config.go:309] "Starting node config controller"
	I0919 23:22:11.101791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:22:11.102510       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:22:11.102043       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:22:11.102609       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:22:11.102021       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:22:11.102641       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:22:11.202347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:22:11.203586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:22:11.203600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ce95677f2eeb4f804753d5cd028d512a58256be3cfc52031593fea5a8cde0340] <==
	I0919 23:22:08.414937       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:22:09.837040       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:22:09.837084       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:22:09.837096       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:22:09.837128       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:22:09.862181       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:22:09.862208       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:22:09.865416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:22:09.865647       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:22:09.866181       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:22:09.865682       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:22:09.966349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: E0919 23:23:09.524413    2633 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"no-preload-042753\" not found"
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.525239    2633 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.526671    2633 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.527576    2633 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.533843    2633 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.534371    2633 factory.go:223] Registration of the crio container factory successfully
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: I0919 23:23:09.538285    2633 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: E0919 23:23:09.538308    2633 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:23:09 no-preload-042753 kubelet[2633]: E0919 23:23:09.538325    2633 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:23:09 no-preload-042753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:23:09 no-preload-042753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Sep 19 23:23:10 no-preload-042753 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Sep 19 23:23:10 no-preload-042753 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Sep 19 23:23:10 no-preload-042753 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: Flag --cgroups-per-qos has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: Flag --enforce-node-allocatable has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.269746    2832 server.go:529] "Kubelet version" kubeletVersion="v1.34.0"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.269812    2832 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.269833    2832 watchdog_linux.go:95] "Systemd watchdog is not enabled"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.269842    2832 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.270047    2832 server.go:956] "Client rotation is on, will bootstrap in background"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.271093    2832 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.272855    2832 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: E0919 23:23:10.276622    2832 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
	Sep 19 23:23:10 no-preload-042753 kubelet[2832]: I0919 23:23:10.276715    2832 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config."
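	The earlier restart in this block died on "inotify_init: too many open files", which on a shared CI host usually means the per-user inotify instance limit (or the open-file limit) was exhausted by parallel clusters. A hedged check and bump on the host (sketch; the value 512 is illustrative, not the job's actual setting):
	  sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
	  sudo sysctl -w fs.inotify.max_user_instances=512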
	
	
	==> kubernetes-dashboard [8e447cc9d3b20c0b9fadcfe22402d8f0da4f460c388920cd360017de22c1d27c] <==
	2025/09/19 23:22:18 Using namespace: kubernetes-dashboard
	2025/09/19 23:22:18 Using in-cluster config to connect to apiserver
	2025/09/19 23:22:18 Using secret token for csrf signing
	2025/09/19 23:22:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:22:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:22:18 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:22:18 Generating JWE encryption key
	2025/09/19 23:22:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:22:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:22:18 Initializing JWE encryption key from synchronized object
	2025/09/19 23:22:18 Creating in-cluster Sidecar client
	2025/09/19 23:22:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:22:18 Serving insecurely on HTTP port: 9090
	2025/09/19 23:22:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:22:18 Starting overwatch
	
	
	==> storage-provisioner [33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276] <==
	I0919 23:22:41.569788       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:22:41.569843       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:22:41.572647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:45.029028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:49.289053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:52.887484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:55.941276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.964729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.973635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:22:58.973819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:22:58.973912       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05ef580f-ffae-4b3d-9189-3655be70accb", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d became leader
	I0919 23:22:58.973998       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d!
	W0919 23:22:58.977677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.994562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:22:59.075237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d!
	W0919 23:23:00.997996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:01.001665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:03.005008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:03.010513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:05.013306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:05.017613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:07.737535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:07.742504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:09.746358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:09.751320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f4ddbac3a81f0ed31d22f6b78a0824a09338a27187fc4ba2b7e7957cdcae6f30] <==
	I0919 23:22:10.863872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:22:40.866771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
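The two storage-provisioner containers above tell the restart story: the first one (f4ddbac3...) exits fatally because the in-cluster apiserver VIP https://10.96.0.1:443 does not answer within its 32s client timeout while the node is still coming back up, and its replacement (33d900fc...) starts later, keeps retrying, and eventually acquires the kube-system/k8s.io-minikube-hostpath lease. A minimal client-go sketch of that wait-for-apiserver pattern, assuming an in-cluster pod and not taken from the provisioner's actual source, could look like this:

package main

import (
	"fmt"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Hypothetical wait loop, not the storage-provisioner's real code:
	// build the in-cluster config (which is what resolves to 10.96.0.1:443)
	// and poll /version until the apiserver answers, instead of giving up
	// after a single timed-out request.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	cfg.Timeout = 10 * time.Second

	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	for attempt := 0; attempt < 30; attempt++ {
		if v, err := client.Discovery().ServerVersion(); err == nil {
			fmt.Println("apiserver reachable, version:", v.GitVersion)
			return
		}
		time.Sleep(2 * time.Second)
	}
	panic("apiserver never became reachable")
}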
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753: exit status 2 (329.74799ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-042753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-p99mj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj: exit status 1 (69.934566ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-p99mj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj: exit status 1
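The non-running-pod sweep above is a two-step post-mortem: list every pod whose phase is not Running across all namespaces, then describe each one (the metrics-server pod had already been deleted by the time describe ran, hence the NotFound). A small client-go sketch of the same field-selector query, purely illustrative and not the harness's own code, with the kubeconfig location assumed to come from the KUBECONFIG environment variable:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Illustrative only: connect via whatever KUBECONFIG points at.
	// (The harness pins a context with --context; this sketch just uses
	// the kubeconfig's current context.)
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same filter the post-mortem uses: every pod, in every namespace,
	// whose phase is not Running.
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}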
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-042753
helpers_test.go:243: (dbg) docker inspect no-preload-042753:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124",
	        "Created": "2025-09-19T23:20:40.557817758Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 273282,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:22:00.903639308Z",
	            "FinishedAt": "2025-09-19T23:21:59.900192386Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/hostname",
	        "HostsPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/hosts",
	        "LogPath": "/var/lib/docker/containers/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124/d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124-json.log",
	        "Name": "/no-preload-042753",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-042753:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-042753",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d4496fbe8d25568b5020c838a8a29cd1ffabb602e020c326b28ec0248f0ec124",
	                "LowerDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/merged",
	                "UpperDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/diff",
	                "WorkDir": "/var/lib/docker/overlay2/a9082df055617c8d08fd115aa364b874c5bad34b880b38b0b4863d9a57bacaee/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-042753",
	                "Source": "/var/lib/docker/volumes/no-preload-042753/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-042753",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-042753",
	                "name.minikube.sigs.k8s.io": "no-preload-042753",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9877137170d9a52c3acde345442271113bb96cdcf3ee547304175bdcf70eaedf",
	            "SandboxKey": "/var/run/docker/netns/9877137170d9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33074"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33075"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33078"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33076"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33077"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-042753": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "3e:86:c9:d6:bb:f4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "953cfdb974f23a5422d321f747f028d933c5997145eda1e683201f237462ca50",
	                    "EndpointID": "55f7280fb6f38b0ca7c886c8630a83ca7041b172a504851292f5a1cc4cfd017d",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-042753",
	                        "d4496fbe8d25"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
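In the inspect output above, each requested binding in HostConfig.PortBindings asks for 127.0.0.1 with an empty HostPort, and NetworkSettings.Ports records what the daemon actually assigned (for example 8443/tcp, the apiserver port, maps to 127.0.0.1:33077), which is why clients on the host reach the node through 127.0.0.1:<mapped port> rather than the container IP. A rough sketch of reading one such mapping back with a docker inspect format template, illustrative only:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Illustrative only: ask the Docker daemon which host port it assigned
	// to the container's 8443/tcp - the same mapping shown above as
	// NetworkSettings.Ports["8443/tcp"] -> 127.0.0.1:33077.
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "inspect", "-f", tmpl, "no-preload-042753").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("8443/tcp published on host port", strings.TrimSpace(string(out)))
}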
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753: exit status 2 (359.939493ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-042753 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-042753 logs -n 25: (1.616906475s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ stop    │ -p kubernetes-upgrade-496007                                                                                                                                                                                                                  │ kubernetes-upgrade-496007    │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ start   │ -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-496007    │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │                     │
	│ delete  │ -p missing-upgrade-322300                                                                                                                                                                                                                     │ missing-upgrade-322300       │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:20 UTC │
	│ start   │ -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:20 UTC │ 19 Sep 25 23:21 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-131186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ stop    │ -p old-k8s-version-131186 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-131186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ start   │ -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:22 UTC │
	│ addons  │ enable metrics-server -p no-preload-042753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:21 UTC │
	│ stop    │ -p no-preload-042753 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:21 UTC │ 19 Sep 25 23:22 UTC │
	│ addons  │ enable dashboard -p no-preload-042753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                       │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ image   │ old-k8s-version-131186 image list --format=json                                                                                                                                                                                               │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ pause   │ -p old-k8s-version-131186 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ unpause │ -p old-k8s-version-131186 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p old-k8s-version-131186                                                                                                                                                                                                                     │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p old-k8s-version-131186                                                                                                                                                                                                                     │ old-k8s-version-131186       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │                     │
	│ start   │ -p cert-expiration-463082 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-463082       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p cert-expiration-463082                                                                                                                                                                                                                     │ cert-expiration-463082       │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ delete  │ -p disable-driver-mounts-815969                                                                                                                                                                                                               │ disable-driver-mounts-815969 │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │ 19 Sep 25 23:22 UTC │
	│ start   │ -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:22 UTC │                     │
	│ image   │ no-preload-042753 image list --format=json                                                                                                                                                                                                    │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ pause   │ -p no-preload-042753 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ unpause │ -p no-preload-042753 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:22:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:22:45.595913  283801 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:22:45.596300  283801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:22:45.596314  283801 out.go:374] Setting ErrFile to fd 2...
	I0919 23:22:45.596320  283801 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:22:45.596650  283801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:22:45.597836  283801 out.go:368] Setting JSON to false
	I0919 23:22:45.600344  283801 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7516,"bootTime":1758316650,"procs":763,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:22:45.600483  283801 start.go:140] virtualization: kvm guest
	I0919 23:22:45.602800  283801 out.go:179] * [default-k8s-diff-port-523696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:22:45.604356  283801 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:22:45.604416  283801 notify.go:220] Checking for updates...
	I0919 23:22:45.607047  283801 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:22:45.608835  283801 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:22:45.610025  283801 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 23:22:45.611338  283801 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:22:45.613443  283801 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	W0919 23:22:41.543989  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	W0919 23:22:44.041707  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:45.615620  283801 config.go:182] Loaded profile config "embed-certs-756077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.615751  283801 config.go:182] Loaded profile config "kubernetes-upgrade-496007": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.615913  283801 config.go:182] Loaded profile config "no-preload-042753": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:45.616059  283801 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:22:45.652598  283801 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:22:45.652707  283801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:22:45.727444  283801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:22:45.715390918 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:22:45.727557  283801 docker.go:318] overlay module found
	I0919 23:22:45.729602  283801 out.go:179] * Using the docker driver based on user configuration
	I0919 23:22:45.731157  283801 start.go:304] selected driver: docker
	I0919 23:22:45.731178  283801 start.go:918] validating driver "docker" against <nil>
	I0919 23:22:45.731194  283801 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:22:45.731820  283801 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:22:45.808411  283801 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:76 SystemTime:2025-09-19 23:22:45.795341118 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:22:45.808655  283801 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 23:22:45.808919  283801 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:22:45.813276  283801 out.go:179] * Using Docker driver with root privileges
	I0919 23:22:45.817282  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:22:45.817379  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:45.817396  283801 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 23:22:45.817497  283801 start.go:348] cluster config:
	{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0919 23:22:45.819141  283801 out.go:179] * Starting "default-k8s-diff-port-523696" primary control-plane node in "default-k8s-diff-port-523696" cluster
	I0919 23:22:45.820325  283801 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 23:22:45.821800  283801 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:22:45.823206  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:45.823238  283801 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:22:45.823262  283801 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:22:45.823275  283801 cache.go:58] Caching tarball of preloaded images
	I0919 23:22:45.823389  283801 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:22:45.823406  283801 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:22:45.823516  283801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:22:45.823540  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json: {Name:mk094da9574bd9890ccffdc8df893ebc18aee319 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:45.857175  283801 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:22:45.857201  283801 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:22:45.857220  283801 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:22:45.857251  283801 start.go:360] acquireMachinesLock for default-k8s-diff-port-523696: {Name:mk3e8cf47fc7b3222021a2ea03ba5708af5f316a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:22:45.857368  283801 start.go:364] duration metric: took 96.802µs to acquireMachinesLock for "default-k8s-diff-port-523696"
	I0919 23:22:45.857401  283801 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmware
Path: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:22:45.857493  283801 start.go:125] createHost starting for "" (driver="docker")
	I0919 23:22:41.739404  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0919 23:22:41.739471  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:41.739533  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:41.785110  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:41.785136  257816 cri.go:89] found id: "2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:41.785143  257816 cri.go:89] found id: ""
	I0919 23:22:41.785152  257816 logs.go:282] 2 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]
	I0919 23:22:41.785207  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.789586  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.793327  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:41.793391  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:41.834505  257816 cri.go:89] found id: ""
	I0919 23:22:41.834531  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.834541  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:41.834548  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:41.834601  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:41.876620  257816 cri.go:89] found id: ""
	I0919 23:22:41.876649  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.876659  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:41.876667  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:41.876722  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:41.917202  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:41.917227  257816 cri.go:89] found id: ""
	I0919 23:22:41.917237  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:41.917304  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:41.921398  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:41.921463  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:41.963433  257816 cri.go:89] found id: ""
	I0919 23:22:41.963460  257816 logs.go:282] 0 containers: []
	W0919 23:22:41.963471  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:41.963478  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:41.963526  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:42.004741  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:42.004766  257816 cri.go:89] found id: "a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0"
	I0919 23:22:42.004776  257816 cri.go:89] found id: ""
	I0919 23:22:42.004801  257816 logs.go:282] 2 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074 a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0]
	I0919 23:22:42.004869  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:42.009032  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:42.015263  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:42.015338  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:42.068053  257816 cri.go:89] found id: ""
	I0919 23:22:42.068084  257816 logs.go:282] 0 containers: []
	W0919 23:22:42.068094  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:42.068115  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:42.068174  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:42.122936  257816 cri.go:89] found id: ""
	I0919 23:22:42.122962  257816 logs.go:282] 0 containers: []
	W0919 23:22:42.122973  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:42.122992  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:42.123007  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:22:42.222851  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:42.222887  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0919 23:22:45.859755  283801 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I0919 23:22:45.860002  283801 start.go:159] libmachine.API.Create for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:22:45.860038  283801 client.go:168] LocalClient.Create starting
	I0919 23:22:45.860140  283801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem
	I0919 23:22:45.860177  283801 main.go:141] libmachine: Decoding PEM data...
	I0919 23:22:45.860192  283801 main.go:141] libmachine: Parsing certificate...
	I0919 23:22:45.860248  283801 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem
	I0919 23:22:45.860275  283801 main.go:141] libmachine: Decoding PEM data...
	I0919 23:22:45.860283  283801 main.go:141] libmachine: Parsing certificate...
	I0919 23:22:45.860595  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0919 23:22:45.881526  283801 cli_runner.go:211] docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0919 23:22:45.881627  283801 network_create.go:284] running [docker network inspect default-k8s-diff-port-523696] to gather additional debugging logs...
	I0919 23:22:45.881652  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696
	W0919 23:22:45.904655  283801 cli_runner.go:211] docker network inspect default-k8s-diff-port-523696 returned with exit code 1
	I0919 23:22:45.904691  283801 network_create.go:287] error running [docker network inspect default-k8s-diff-port-523696]: docker network inspect default-k8s-diff-port-523696: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-523696 not found
	I0919 23:22:45.904714  283801 network_create.go:289] output of [docker network inspect default-k8s-diff-port-523696]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-523696 not found
	
	** /stderr **
	I0919 23:22:45.904810  283801 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:22:45.925215  283801 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8b1b6c79ac61 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:3e:90:cd:d5:3a} reservation:<nil>}
	I0919 23:22:45.925942  283801 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-20306adbc8e7 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:96:2f:5e:f5:4d:ee} reservation:<nil>}
	I0919 23:22:45.926646  283801 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e3bc7e48275b IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:f6:f1:66:e9:e5:54} reservation:<nil>}
	I0919 23:22:45.927471  283801 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001d23ca0}
	I0919 23:22:45.927494  283801 network_create.go:124] attempt to create docker network default-k8s-diff-port-523696 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0919 23:22:45.927559  283801 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 default-k8s-diff-port-523696
	I0919 23:22:45.999193  283801 network_create.go:108] docker network default-k8s-diff-port-523696 192.168.76.0/24 created
	I0919 23:22:45.999231  283801 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-523696" container
	I0919 23:22:45.999318  283801 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0919 23:22:46.020910  283801 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-523696 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --label created_by.minikube.sigs.k8s.io=true
	I0919 23:22:46.043037  283801 oci.go:103] Successfully created a docker volume default-k8s-diff-port-523696
	I0919 23:22:46.043169  283801 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-523696-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --entrypoint /usr/bin/test -v default-k8s-diff-port-523696:/var gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -d /var/lib
	I0919 23:22:46.516794  283801 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-523696
	I0919 23:22:46.516832  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:46.516865  283801 kic.go:194] Starting extracting preloaded images to volume ...
	I0919 23:22:46.516943  283801 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523696:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir
	W0919 23:22:46.541945  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	W0919 23:22:49.044069  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:50.668432  278994 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:22:50.668528  278994 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:22:50.668662  278994 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:22:50.668733  278994 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:22:50.668780  278994 kubeadm.go:310] OS: Linux
	I0919 23:22:50.668861  278994 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:22:50.668933  278994 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:22:50.669004  278994 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:22:50.669077  278994 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:22:50.669161  278994 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:22:50.669211  278994 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:22:50.669254  278994 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:22:50.669292  278994 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:22:50.669380  278994 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:22:50.669519  278994 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:22:50.669630  278994 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:22:50.669709  278994 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:22:50.806400  278994 out.go:252]   - Generating certificates and keys ...
	I0919 23:22:50.806510  278994 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:22:50.806621  278994 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:22:50.806711  278994 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:22:50.806857  278994 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:22:50.806971  278994 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:22:50.807032  278994 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:22:50.807088  278994 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:22:50.807233  278994 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [embed-certs-756077 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:22:50.807297  278994 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:22:50.807423  278994 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-756077 localhost] and IPs [192.168.103.2 127.0.0.1 ::1]
	I0919 23:22:50.807489  278994 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:22:50.807565  278994 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:22:50.807612  278994 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:22:50.807677  278994 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:22:50.807748  278994 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:22:50.807817  278994 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:22:50.807898  278994 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:22:50.807980  278994 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:22:50.808035  278994 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:22:50.808186  278994 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:22:50.808314  278994 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:22:50.944612  278994 out.go:252]   - Booting up control plane ...
	I0919 23:22:50.944749  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:22:50.944847  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:22:50.944951  278994 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:22:50.945123  278994 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:22:50.945287  278994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:22:50.945456  278994 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:22:50.945588  278994 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:22:50.945660  278994 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:22:50.945868  278994 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:22:50.946023  278994 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:22:50.946097  278994 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.079086ms
	I0919 23:22:50.946255  278994 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:22:50.946400  278994 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I0919 23:22:50.946538  278994 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:22:50.946653  278994 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:22:50.946765  278994 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.40569387s
	I0919 23:22:50.946915  278994 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 2.944914128s
	I0919 23:22:50.947015  278994 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.503168258s
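Note: the three [control-plane-check] probes above are plain HTTPS GETs against the health endpoints named in the log. As an illustration only (endpoint URLs and the 4m0s budget are taken from the log lines above; this is not minikube or kubeadm code), a small Go poller for the same endpoints could look like:

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    // Endpoints copied from the [control-plane-check] lines above.
    var endpoints = []string{
        "https://192.168.103.2:8443/livez", // kube-apiserver
        "https://127.0.0.1:10257/healthz",  // kube-controller-manager
        "https://127.0.0.1:10259/livez",    // kube-scheduler
    }

    func main() {
        // The components serve self-signed certs during bootstrap, so skip verification here.
        client := &http.Client{
            Timeout:   3 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(4 * time.Minute) // matches the 4m0s budget in the log
        for _, url := range endpoints {
            for {
                resp, err := client.Get(url)
                if err == nil && resp.StatusCode == http.StatusOK {
                    resp.Body.Close()
                    fmt.Println(url, "healthy")
                    break
                }
                if resp != nil {
                    resp.Body.Close()
                }
                if time.Now().After(deadline) {
                    fmt.Println(url, "not healthy before deadline")
                    break
                }
                time.Sleep(500 * time.Millisecond)
            }
        }
    }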
	I0919 23:22:50.947183  278994 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:22:50.947347  278994 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:22:50.947433  278994 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:22:50.947708  278994 kubeadm.go:310] [mark-control-plane] Marking the node embed-certs-756077 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:22:50.947779  278994 kubeadm.go:310] [bootstrap-token] Using token: 05dt5h.no6yext1q5butvcd
	I0919 23:22:50.950871  278994 out.go:252]   - Configuring RBAC rules ...
	I0919 23:22:50.951051  278994 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:22:50.951203  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:22:50.951406  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:22:50.951568  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:22:50.951689  278994 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:22:50.951804  278994 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:22:50.952051  278994 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:22:50.952129  278994 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:22:50.952196  278994 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:22:50.952205  278994 kubeadm.go:310] 
	I0919 23:22:50.952262  278994 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:22:50.952275  278994 kubeadm.go:310] 
	I0919 23:22:50.952369  278994 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:22:50.952426  278994 kubeadm.go:310] 
	I0919 23:22:50.952508  278994 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:22:50.952638  278994 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:22:50.952734  278994 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:22:50.952745  278994 kubeadm.go:310] 
	I0919 23:22:50.952809  278994 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:22:50.952815  278994 kubeadm.go:310] 
	I0919 23:22:50.952875  278994 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:22:50.952887  278994 kubeadm.go:310] 
	I0919 23:22:50.952968  278994 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:22:50.953202  278994 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:22:50.953340  278994 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:22:50.953365  278994 kubeadm.go:310] 
	I0919 23:22:50.953570  278994 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:22:50.953689  278994 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:22:50.953696  278994 kubeadm.go:310] 
	I0919 23:22:50.953805  278994 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token 05dt5h.no6yext1q5butvcd \
	I0919 23:22:50.953932  278994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 23:22:50.953959  278994 kubeadm.go:310] 	--control-plane 
	I0919 23:22:50.953963  278994 kubeadm.go:310] 
	I0919 23:22:50.954120  278994 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:22:50.954128  278994 kubeadm.go:310] 
	I0919 23:22:50.954237  278994 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token 05dt5h.no6yext1q5butvcd \
	I0919 23:22:50.954381  278994 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
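Note: the --discovery-token-ca-cert-hash printed in the join command above is the SHA-256 digest of the cluster CA certificate's DER-encoded Subject Public Key Info. A minimal Go sketch of recomputing it (the path /etc/kubernetes/pki/ca.crt is kubeadm's conventional CA location on the node and is an assumption here, not something shown in this log):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        // Assumed path: kubeadm's default CA certificate on the control-plane node.
        pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM block found in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
        sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
        fmt.Printf("sha256:%x\n", sum)
    }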
	I0919 23:22:50.954392  278994 cni.go:84] Creating CNI manager for ""
	I0919 23:22:50.954402  278994 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:50.958294  278994 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I0919 23:22:50.961078  278994 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:22:50.966992  278994 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:22:50.967027  278994 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:22:50.995512  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:22:51.269484  278994 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:22:51.269602  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:51.269685  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-756077 minikube.k8s.io/updated_at=2025_09_19T23_22_51_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=embed-certs-756077 minikube.k8s.io/primary=true
	I0919 23:22:51.356378  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:51.368546  278994 ops.go:34] apiserver oom_adj: -16
	I0919 23:22:51.856941  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:52.357350  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:52.857348  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0919 23:22:51.044396  272997 pod_ready.go:104] pod "coredns-66bc5c9577-5jl4c" is not "Ready", error: <nil>
	I0919 23:22:52.541643  272997 pod_ready.go:94] pod "coredns-66bc5c9577-5jl4c" is "Ready"
	I0919 23:22:52.541668  272997 pod_ready.go:86] duration metric: took 40.505567802s for pod "coredns-66bc5c9577-5jl4c" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.544465  272997 pod_ready.go:83] waiting for pod "etcd-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.548683  272997 pod_ready.go:94] pod "etcd-no-preload-042753" is "Ready"
	I0919 23:22:52.548708  272997 pod_ready.go:86] duration metric: took 4.220664ms for pod "etcd-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.551162  272997 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.556198  272997 pod_ready.go:94] pod "kube-apiserver-no-preload-042753" is "Ready"
	I0919 23:22:52.556228  272997 pod_ready.go:86] duration metric: took 5.043479ms for pod "kube-apiserver-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.558721  272997 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.739379  272997 pod_ready.go:94] pod "kube-controller-manager-no-preload-042753" is "Ready"
	I0919 23:22:52.739421  272997 pod_ready.go:86] duration metric: took 180.670358ms for pod "kube-controller-manager-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:52.940072  272997 pod_ready.go:83] waiting for pod "kube-proxy-bgkfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.339684  272997 pod_ready.go:94] pod "kube-proxy-bgkfm" is "Ready"
	I0919 23:22:53.339716  272997 pod_ready.go:86] duration metric: took 399.589997ms for pod "kube-proxy-bgkfm" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.539747  272997 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.939792  272997 pod_ready.go:94] pod "kube-scheduler-no-preload-042753" is "Ready"
	I0919 23:22:53.939821  272997 pod_ready.go:86] duration metric: took 400.050647ms for pod "kube-scheduler-no-preload-042753" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:22:53.939836  272997 pod_ready.go:40] duration metric: took 41.90921689s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:22:53.995997  272997 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:22:53.997748  272997 out.go:179] * Done! kubectl is now configured to use "no-preload-042753" cluster and "default" namespace by default
	I0919 23:22:50.970795  283801 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-523696:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.453777091s)
	I0919 23:22:50.970834  283801 kic.go:203] duration metric: took 4.453965068s to extract preloaded images to volume ...
	W0919 23:22:50.970939  283801 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0919 23:22:50.970979  283801 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I0919 23:22:50.971020  283801 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0919 23:22:51.047347  283801 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-523696 --name default-k8s-diff-port-523696 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-523696 --network default-k8s-diff-port-523696 --ip 192.168.76.2 --volume default-k8s-diff-port-523696:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1
	I0919 23:22:51.398204  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Running}}
	I0919 23:22:51.423380  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.446056  283801 cli_runner.go:164] Run: docker exec default-k8s-diff-port-523696 stat /var/lib/dpkg/alternatives/iptables
	I0919 23:22:51.501886  283801 oci.go:144] the created container "default-k8s-diff-port-523696" has a running status.
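Note: the two "docker container inspect" calls above read the container's Running/Status fields right after "docker run". A throwaway Go sketch that waits for the same condition via the docker CLI (container name taken from the log; this is not minikube's oci.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        const name = "default-k8s-diff-port-523696" // container name from the log above
        deadline := time.Now().Add(60 * time.Second)
        for {
            // Same query as the log: docker container inspect --format={{.State.Running}}
            out, err := exec.Command("docker", "container", "inspect",
                "--format", "{{.State.Running}}", name).Output()
            if err == nil && strings.TrimSpace(string(out)) == "true" {
                fmt.Println(name, "is running")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("timed out waiting for", name)
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
    }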
	I0919 23:22:51.501916  283801 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa...
	I0919 23:22:51.550645  283801 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0919 23:22:51.583303  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.605265  283801 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0919 23:22:51.605289  283801 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-523696 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0919 23:22:51.670333  283801 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:22:51.698050  283801 machine.go:93] provisionDockerMachine start ...
	I0919 23:22:51.698163  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:51.725188  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:51.725624  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:51.725648  283801 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:22:51.871213  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:22:51.871246  283801 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523696"
	I0919 23:22:51.871338  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:51.896996  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:51.897302  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:51.897331  283801 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523696 && echo "default-k8s-diff-port-523696" | sudo tee /etc/hostname
	I0919 23:22:52.053414  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:22:52.053491  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.074331  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:52.074601  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:52.074631  283801 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523696/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:22:52.213027  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
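Note: the provisioning commands above (hostname, the /etc/hostname write, and the /etc/hosts patch) all go over the SSH port Docker published for the container (127.0.0.1:33084; the per-profile key and "docker" user appear in the sshutil lines below). A minimal stand-alone Go sketch of the same round trip, using the golang.org/x/crypto/ssh module (illustrative only; minikube uses its own libmachine SSH client):

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        // Values taken from the log: forwarded SSH port and per-profile key.
        const addr = "127.0.0.1:33084"
        const keyPath = "/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa"

        keyBytes, err := os.ReadFile(keyPath)
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test container, no known_hosts
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("hostname reported by the node: %s", out)
    }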
	I0919 23:22:52.213062  283801 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 23:22:52.213151  283801 ubuntu.go:190] setting up certificates
	I0919 23:22:52.213166  283801 provision.go:84] configureAuth start
	I0919 23:22:52.213261  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:52.231215  283801 provision.go:143] copyHostCerts
	I0919 23:22:52.231283  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 23:22:52.231296  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 23:22:52.231390  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 23:22:52.231551  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 23:22:52.231566  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 23:22:52.231606  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 23:22:52.231687  283801 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 23:22:52.231697  283801 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 23:22:52.231736  283801 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 23:22:52.231824  283801 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523696 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-523696 localhost minikube]
	I0919 23:22:52.685747  283801 provision.go:177] copyRemoteCerts
	I0919 23:22:52.685816  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:22:52.685861  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.704970  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:52.803041  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:22:52.831600  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 23:22:52.858378  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0919 23:22:52.887800  283801 provision.go:87] duration metric: took 674.617806ms to configureAuth
	I0919 23:22:52.887832  283801 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:22:52.888001  283801 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:52.888096  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:52.908462  283801 main.go:141] libmachine: Using SSH client type: native
	I0919 23:22:52.908689  283801 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33084 <nil> <nil>}
	I0919 23:22:52.908711  283801 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:22:53.161476  283801 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:22:53.161503  283801 machine.go:96] duration metric: took 1.463430077s to provisionDockerMachine
	I0919 23:22:53.161520  283801 client.go:171] duration metric: took 7.301466529s to LocalClient.Create
	I0919 23:22:53.161541  283801 start.go:167] duration metric: took 7.301538688s to libmachine.API.Create "default-k8s-diff-port-523696"
	I0919 23:22:53.161549  283801 start.go:293] postStartSetup for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:22:53.161566  283801 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:22:53.161627  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:22:53.161662  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.181146  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.282060  283801 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:22:53.285696  283801 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:22:53.285738  283801 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:22:53.285748  283801 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:22:53.285755  283801 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:22:53.285766  283801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 23:22:53.285833  283801 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 23:22:53.285930  283801 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 23:22:53.286090  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:22:53.296191  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:22:53.325713  283801 start.go:296] duration metric: took 164.145408ms for postStartSetup
	I0919 23:22:53.326128  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:53.344693  283801 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:22:53.344984  283801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:22:53.345028  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.363816  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.458166  283801 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:22:53.462892  283801 start.go:128] duration metric: took 7.605384435s to createHost
	I0919 23:22:53.462920  283801 start.go:83] releasing machines lock for "default-k8s-diff-port-523696", held for 7.605536132s
	I0919 23:22:53.463009  283801 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:22:53.481448  283801 ssh_runner.go:195] Run: cat /version.json
	I0919 23:22:53.481507  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.481532  283801 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:22:53.481610  283801 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:22:53.502356  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.502788  283801 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33084 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:22:53.596611  283801 ssh_runner.go:195] Run: systemctl --version
	I0919 23:22:53.674184  283801 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:22:53.818922  283801 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:22:53.823893  283801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:22:53.848892  283801 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:22:53.848977  283801 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:22:53.883792  283801 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0919 23:22:53.883820  283801 start.go:495] detecting cgroup driver to use...
	I0919 23:22:53.883868  283801 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:22:53.883915  283801 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:22:53.901387  283801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:22:53.914502  283801 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:22:53.914565  283801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:22:53.932453  283801 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:22:53.952211  283801 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:22:54.032454  283801 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:22:54.120602  283801 docker.go:234] disabling docker service ...
	I0919 23:22:54.120684  283801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:22:54.139858  283801 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:22:54.154159  283801 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:22:54.229745  283801 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:22:54.420914  283801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:22:54.434730  283801 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:22:54.453483  283801 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:22:54.453558  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.467233  283801 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 23:22:54.467296  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.478437  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.489333  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.500662  283801 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:22:54.511714  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.522985  283801 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.540467  283801 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:22:54.552149  283801 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:22:54.561457  283801 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:22:54.571524  283801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:54.639993  283801 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:22:54.749200  283801 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:22:54.749294  283801 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:22:54.754519  283801 start.go:563] Will wait 60s for crictl version
	I0919 23:22:54.754593  283801 ssh_runner.go:195] Run: which crictl
	I0919 23:22:54.758837  283801 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:22:54.794349  283801 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
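Note: after "systemctl restart crio" the harness waits up to 60s for /var/run/crio/crio.sock to appear before probing crictl (see the "Will wait 60s for socket path" and "stat /var/run/crio/crio.sock" lines above). A small Go equivalent of that wait, with the socket path and timeout taken from the log (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const sock = "/var/run/crio/crio.sock" // socket path from the log above
        deadline := time.Now().Add(60 * time.Second)
        for {
            if _, err := os.Stat(sock); err == nil {
                fmt.Println(sock, "is present; crictl can be probed now")
                return
            }
            if time.Now().After(deadline) {
                fmt.Println("gave up waiting for", sock)
                return
            }
            time.Sleep(200 * time.Millisecond)
        }
    }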
	I0919 23:22:54.794444  283801 ssh_runner.go:195] Run: crio --version
	I0919 23:22:54.832527  283801 ssh_runner.go:195] Run: crio --version
	I0919 23:22:54.875403  283801 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 23:22:53.356801  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:53.856908  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.357010  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.857181  278994 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:22:54.936054  278994 kubeadm.go:1105] duration metric: took 3.666502786s to wait for elevateKubeSystemPrivileges
	I0919 23:22:54.936088  278994 kubeadm.go:394] duration metric: took 15.878020809s to StartCluster
	I0919 23:22:54.936125  278994 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:54.936215  278994 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:22:54.937766  278994 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:54.938034  278994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:22:54.938060  278994 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:22:54.938168  278994 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:22:54.938272  278994 config.go:182] Loaded profile config "embed-certs-756077": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:22:54.938264  278994 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-756077"
	I0919 23:22:54.938301  278994 addons.go:238] Setting addon storage-provisioner=true in "embed-certs-756077"
	I0919 23:22:54.938756  278994 host.go:66] Checking if "embed-certs-756077" exists ...
	I0919 23:22:54.938783  278994 addons.go:69] Setting default-storageclass=true in profile "embed-certs-756077"
	I0919 23:22:54.938826  278994 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-756077"
	I0919 23:22:54.939448  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.939705  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.941840  278994 out.go:179] * Verifying Kubernetes components...
	I0919 23:22:54.944232  278994 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:54.969686  278994 addons.go:238] Setting addon default-storageclass=true in "embed-certs-756077"
	I0919 23:22:54.969732  278994 host.go:66] Checking if "embed-certs-756077" exists ...
	I0919 23:22:54.970380  278994 cli_runner.go:164] Run: docker container inspect embed-certs-756077 --format={{.State.Status}}
	I0919 23:22:54.970557  278994 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:22:54.976983  278994 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:22:54.977009  278994 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:22:54.977123  278994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756077
	I0919 23:22:55.001926  278994 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:22:55.001953  278994 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:22:55.002125  278994 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-756077
	I0919 23:22:55.005386  278994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/embed-certs-756077/id_rsa Username:docker}
	I0919 23:22:55.027432  278994 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33079 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/embed-certs-756077/id_rsa Username:docker}
	I0919 23:22:55.045999  278994 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:22:55.077518  278994 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:22:55.141609  278994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:22:55.152579  278994 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:22:55.261828  278994 start.go:976] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I0919 23:22:55.263261  278994 node_ready.go:35] waiting up to 6m0s for node "embed-certs-756077" to be "Ready" ...
	I0919 23:22:55.518814  278994 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:22:54.876787  283801 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:22:54.897253  283801 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:22:54.901429  283801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:22:54.914909  283801 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:22:54.915053  283801 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:22:54.915138  283801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:22:55.027207  283801 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:22:55.027231  283801 crio.go:433] Images already preloaded, skipping extraction
	I0919 23:22:55.027287  283801 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:22:55.082804  283801 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:22:55.082839  283801 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:22:55.082849  283801 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0919 23:22:55.083037  283801 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:22:55.083170  283801 ssh_runner.go:195] Run: crio config
	I0919 23:22:55.149830  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:22:55.149858  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:22:55.149871  283801 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:22:55.149897  283801 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523696 NodeName:default-k8s-diff-port-523696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:22:55.150064  283801 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:22:55.150142  283801 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:22:55.161859  283801 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:22:55.161914  283801 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:22:55.174514  283801 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0919 23:22:55.200808  283801 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:22:55.229639  283801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0919 23:22:55.253552  283801 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:22:55.258542  283801 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
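Note: the bash one-liner above makes the control-plane.minikube.internal mapping idempotent: it strips any existing entry from /etc/hosts and appends the current one. A Go sketch of the same update, simplified to write the file directly (root required; the original writes to /tmp/h.$$ and sudo-copies it back):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        // Same idempotent update the bash one-liner above performs over SSH.
        const entry = "192.168.76.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
                continue // drop any stale mapping first
            }
            kept = append(kept, line)
        }
        kept = append(kept, entry)
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            panic(err)
        }
        fmt.Println("host record written:", entry)
    }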
	I0919 23:22:55.275361  283801 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:22:55.365446  283801 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:22:55.388230  283801 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696 for IP: 192.168.76.2
	I0919 23:22:55.388257  283801 certs.go:194] generating shared ca certs ...
	I0919 23:22:55.388277  283801 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.388450  283801 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 23:22:55.388501  283801 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 23:22:55.388514  283801 certs.go:256] generating profile certs ...
	I0919 23:22:55.388582  283801 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key
	I0919 23:22:55.388598  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt with IP's: []
	I0919 23:22:55.491075  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt ...
	I0919 23:22:55.491124  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.crt: {Name:mk84fb74a6745b447df98b265e5b3c1639ecbc3f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.491406  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key ...
	I0919 23:22:55.491456  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key: {Name:mk1118e088979888764eed348877b43632df1aa0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.491604  283801 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e
	I0919 23:22:55.491628  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
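Note: the apiserver profile certificate above is issued with the IP SANs [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2] and signed by the profile CA. A compact Go sketch of issuing a certificate with those SANs (the CA here is a throwaway generated in-memory so the example is self-contained; minikube's crypto.go instead reuses the profile CA key pair under ~/.minikube):

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA; error handling elided for brevity.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(24 * time.Hour),
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTpl, caTpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Leaf cert with the same IP SANs that appear in the log line above.
        leafKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        leafTpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(24 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses: []net.IP{
                net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
                net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
            },
        }
        leafDER, _ := x509.CreateCertificate(rand.Reader, leafTpl, caCert, &leafKey.PublicKey, caKey)
        fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: leafDER})))
    }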
	I0919 23:22:52.297243  257816 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.074328978s)
	W0919 23:22:52.297295  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I0919 23:22:52.297307  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:52.297321  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:52.345170  257816 logs.go:123] Gathering logs for kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba] ...
	I0919 23:22:52.345222  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:52.394986  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:52.395033  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:52.444079  257816 logs.go:123] Gathering logs for kube-controller-manager [a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0] ...
	I0919 23:22:52.444143  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a1d0bf430e75c8875928d0d3245a97b7045ff5818ced3b6ed7b44b24affe0dc0"
	I0919 23:22:52.483830  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:52.483856  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:52.525769  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:52.525806  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:52.545275  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:52.545300  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:52.618883  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:52.618920  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:55.160688  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:22:55.161219  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:22:55.161297  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:55.161376  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:55.212699  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:55.212733  257816 cri.go:89] found id: "2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	I0919 23:22:55.212739  257816 cri.go:89] found id: ""
	I0919 23:22:55.212749  257816 logs.go:282] 2 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]
	I0919 23:22:55.212807  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.218135  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.223313  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:55.223389  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:55.274293  257816 cri.go:89] found id: ""
	I0919 23:22:55.274322  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.274331  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:55.274339  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:55.274407  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:55.321135  257816 cri.go:89] found id: ""
	I0919 23:22:55.321165  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.321176  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:55.321184  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:55.321248  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:55.363240  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:55.363266  257816 cri.go:89] found id: ""
	I0919 23:22:55.363276  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:55.363344  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.368142  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:55.368209  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:55.422049  257816 cri.go:89] found id: ""
	I0919 23:22:55.422079  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.422089  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:55.422097  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:55.422184  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:55.473271  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:55.473393  257816 cri.go:89] found id: ""
	I0919 23:22:55.473411  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:22:55.473494  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:55.479176  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:55.479299  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:55.530375  257816 cri.go:89] found id: ""
	I0919 23:22:55.530402  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.530412  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:55.530419  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:55.530477  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:55.574195  257816 cri.go:89] found id: ""
	I0919 23:22:55.574228  257816 logs.go:282] 0 containers: []
	W0919 23:22:55.574239  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:55.574257  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:55.574279  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:22:55.652936  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:22:55.652960  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:55.652977  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:55.700673  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:55.700705  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:55.776574  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:55.776616  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:55.825085  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:55.825137  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:55.873634  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:55.873672  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:55.916948  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:55.916984  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
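Each "Gathering logs for ..." pass above shells out on the node, via ssh_runner, to journalctl for kubelet and CRI-O and to crictl for individual container logs. Purely as a hedged local approximation (the real code runs these commands over SSH with different plumbing, and the container ID below is a placeholder), the same collection loop looks roughly like this:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands mirroring the log-gathering pass in the report above.
	cmds := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"CRI-O":            "sudo journalctl -u crio -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"kube-apiserver":   "sudo /usr/bin/crictl logs --tail 400 <container-id>", // placeholder ID
	}
	for name, cmd := range cmds {
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s <==\n%s\n", name, out)
	}
}
```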
	I0919 23:22:55.520432  278994 addons.go:514] duration metric: took 582.260798ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:22:55.766300  278994 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-756077" context rescaled to 1 replicas
	W0919 23:22:57.267427  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	I0919 23:22:55.978492  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e ...
	I0919 23:22:55.978523  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e: {Name:mk984ddd05acd5e1e36fb52bba3da8de3378e2a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.978711  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e ...
	I0919 23:22:55.978727  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e: {Name:mka6692298239dcc8c1eff437a73ac5078ad7789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:55.978821  283801 certs.go:381] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt.3ddce01e -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt
	I0919 23:22:55.978936  283801 certs.go:385] copying /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e -> /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key
	I0919 23:22:55.979028  283801 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key
	I0919 23:22:55.979061  283801 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt with IP's: []
	I0919 23:22:56.104762  283801 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt ...
	I0919 23:22:56.104802  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt: {Name:mkaaaddcc9511ddaf8101dd2778396387c9f0120 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:56.105020  283801 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key ...
	I0919 23:22:56.105042  283801 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key: {Name:mk6e247b15616b2ba853c2b32e9875074a5777ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:22:56.105316  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 23:22:56.105377  283801 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 23:22:56.105388  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:22:56.105420  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:22:56.105446  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:22:56.105473  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 23:22:56.105527  283801 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:22:56.106333  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:22:56.136307  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:22:56.166269  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:22:56.197270  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 23:22:56.232817  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:22:56.267968  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:22:56.302225  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:22:56.329369  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:22:56.357268  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 23:22:56.391291  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:22:56.419600  283801 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 23:22:56.448910  283801 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:22:56.472201  283801 ssh_runner.go:195] Run: openssl version
	I0919 23:22:56.478518  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 23:22:56.490827  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.495619  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.495679  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 23:22:56.503861  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:22:56.515223  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:22:56.526847  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.531858  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.531919  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:22:56.539257  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:22:56.549509  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 23:22:56.560429  283801 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.564500  283801 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.564562  283801 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 23:22:56.572479  283801 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
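The certs.go sequence above installs each CA certificate by computing its OpenSSL subject hash and symlinking `<hash>.0` in /etc/ssl/certs, so TLS clients on the node trust it. A hypothetical illustration of that same sequence (the real steps run via sudo over ssh_runner; this is not minikube's certs.go):

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// installCACert hashes a PEM with openssl and links it into /etc/ssl/certs,
// mirroring the "openssl x509 -hash" + "ln -fs" pair in the log above.
func installCACert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", pemPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash) // e.g. /etc/ssl/certs/b5213941.0 for minikubeCA.pem
	return exec.Command("ln", "-fs", pemPath, link).Run()
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```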
	I0919 23:22:56.584023  283801 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:22:56.587805  283801 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0919 23:22:56.587860  283801 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:22:56.587940  283801 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:22:56.588001  283801 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:22:56.625836  283801 cri.go:89] found id: ""
	I0919 23:22:56.625908  283801 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:22:56.636429  283801 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:22:56.646739  283801 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:22:56.646801  283801 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:22:56.657786  283801 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:22:56.657808  283801 kubeadm.go:157] found existing configuration files:
	
	I0919 23:22:56.657854  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0919 23:22:56.667999  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:22:56.668060  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:22:56.677503  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0919 23:22:56.687116  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:22:56.687196  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:22:56.695842  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0919 23:22:56.705483  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:22:56.705544  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:22:56.714929  283801 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0919 23:22:56.724270  283801 kubeadm.go:163] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:22:56.724354  283801 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
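The config check above greps each existing /etc/kubernetes/*.conf for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not reference it, so kubeadm can regenerate fresh kubeconfigs in the init step that follows. A hedged sketch of the same check, again approximated with local exec rather than minikube's ssh_runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		// grep exits non-zero if the endpoint is absent (or the file is missing);
		// in either case the stale config is removed so kubeadm rewrites it.
		if err := exec.Command("sudo", "grep", endpoint, f).Run(); err != nil {
			fmt.Printf("%q not found in %s - removing\n", endpoint, f)
			_ = exec.Command("sudo", "rm", "-f", f).Run()
		}
	}
}
```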
	I0919 23:22:56.734083  283801 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:22:56.792357  283801 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:22:56.853438  283801 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:22:56.020455  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:56.020490  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:56.039034  257816 logs.go:123] Gathering logs for kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba] ...
	I0919 23:22:56.039076  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	W0919 23:22:56.077680  257816 logs.go:130] failed kube-apiserver [2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba]: command: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba" /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba": Process exited with status 1
	stdout:
	
	stderr:
	E0919 23:22:56.074657    4282 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist" containerID="2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	time="2025-09-19T23:22:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist"
	 output: 
	** stderr ** 
	E0919 23:22:56.074657    4282 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist" containerID="2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba"
	time="2025-09-19T23:22:56Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba\": container with ID starting with 2b2cfd2de5e93769f7b589ca6f58e0ba13734fea8c3750cdd38a6befc2c616ba not found: ID does not exist"
	
	** /stderr **
	I0919 23:22:58.579314  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:22:58.579784  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:22:58.579841  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:22:58.579899  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:22:58.616892  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:58.616924  257816 cri.go:89] found id: ""
	I0919 23:22:58.616934  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:22:58.617003  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.621687  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:22:58.621752  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:22:58.659451  257816 cri.go:89] found id: ""
	I0919 23:22:58.659475  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.659483  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:22:58.659488  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:22:58.659547  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:22:58.697871  257816 cri.go:89] found id: ""
	I0919 23:22:58.697895  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.697904  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:22:58.697912  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:22:58.697967  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:22:58.737444  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:58.737470  257816 cri.go:89] found id: ""
	I0919 23:22:58.737479  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:22:58.737531  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.741438  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:22:58.741504  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:22:58.781147  257816 cri.go:89] found id: ""
	I0919 23:22:58.781173  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.781183  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:22:58.781189  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:22:58.781244  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:22:58.822000  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:58.822029  257816 cri.go:89] found id: ""
	I0919 23:22:58.822038  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:22:58.822132  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:22:58.826176  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:22:58.826240  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:22:58.870692  257816 cri.go:89] found id: ""
	I0919 23:22:58.870721  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.870732  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:22:58.870740  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:22:58.870803  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:22:58.907037  257816 cri.go:89] found id: ""
	I0919 23:22:58.907060  257816 logs.go:282] 0 containers: []
	W0919 23:22:58.907068  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:22:58.907075  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:22:58.907093  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:22:58.924701  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:22:58.924728  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:22:58.996196  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:22:58.996223  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:22:58.996237  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:22:59.041860  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:22:59.041887  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:22:59.106571  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:22:59.106602  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:22:59.145274  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:22:59.145297  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:22:59.195054  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:22:59.195089  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:22:59.241289  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:22:59.241316  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
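Between the log-gathering passes, api_server.go keeps probing the apiserver's /healthz endpoint and records "stopped" for as long as the connection is refused. A minimal, hedged sketch of such a probe (the address is the one from the log above; the serving certificate is deliberately not verified here):

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The probe skips verification of the apiserver's self-signed serving cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for i := 0; i < 5; i++ {
		resp, err := client.Get("https://192.168.94.2:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connect: connection refused, as above
			time.Sleep(2 * time.Second)
			continue
		}
		resp.Body.Close()
		fmt.Println("healthz status:", resp.StatusCode)
		return
	}
}
```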
	W0919 23:22:59.267677  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	W0919 23:23:01.766814  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	I0919 23:23:01.833127  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:01.833556  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:23:01.833609  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:23:01.833662  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:23:01.872807  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:01.872827  257816 cri.go:89] found id: ""
	I0919 23:23:01.872834  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:23:01.872886  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:01.876742  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:23:01.876809  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:23:01.921442  257816 cri.go:89] found id: ""
	I0919 23:23:01.921475  257816 logs.go:282] 0 containers: []
	W0919 23:23:01.921485  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:23:01.921493  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:23:01.921553  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:23:01.961417  257816 cri.go:89] found id: ""
	I0919 23:23:01.961446  257816 logs.go:282] 0 containers: []
	W0919 23:23:01.961457  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:23:01.961463  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:23:01.961520  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:23:01.999638  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:01.999660  257816 cri.go:89] found id: ""
	I0919 23:23:01.999669  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:23:01.999729  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:02.004806  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:23:02.004889  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:23:02.044768  257816 cri.go:89] found id: ""
	I0919 23:23:02.044792  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.044800  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:23:02.044806  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:23:02.044852  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:23:02.087557  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:02.087583  257816 cri.go:89] found id: ""
	I0919 23:23:02.087592  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:23:02.087641  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:02.091577  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:23:02.091648  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:23:02.130551  257816 cri.go:89] found id: ""
	I0919 23:23:02.130578  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.130588  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:23:02.130595  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:23:02.130655  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:23:02.166193  257816 cri.go:89] found id: ""
	I0919 23:23:02.166218  257816 logs.go:282] 0 containers: []
	W0919 23:23:02.166226  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:23:02.166237  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:23:02.166253  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:02.209501  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:23:02.209542  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:02.285555  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:23:02.285593  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:02.322701  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:23:02.322732  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:23:02.379074  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:23:02.379119  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:23:02.427750  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:23:02.427777  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:23:02.533867  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:23:02.533924  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:23:02.558587  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:23:02.558629  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:23:02.627751  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:23:05.129261  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:05.129722  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:23:05.129780  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:23:05.129835  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:23:05.165639  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:05.165657  257816 cri.go:89] found id: ""
	I0919 23:23:05.165665  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:23:05.165723  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.170166  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:23:05.170258  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:23:05.210247  257816 cri.go:89] found id: ""
	I0919 23:23:05.210279  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.210292  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:23:05.210302  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:23:05.210365  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:23:05.252292  257816 cri.go:89] found id: ""
	I0919 23:23:05.252314  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.252343  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:23:05.252351  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:23:05.252413  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:23:05.290166  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:05.290194  257816 cri.go:89] found id: ""
	I0919 23:23:05.290203  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:23:05.290255  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.294253  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:23:05.294323  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:23:05.335596  257816 cri.go:89] found id: ""
	I0919 23:23:05.335624  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.335634  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:23:05.335642  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:23:05.335704  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:23:05.380797  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:05.380823  257816 cri.go:89] found id: ""
	I0919 23:23:05.380833  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:23:05.380909  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:05.385144  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:23:05.385214  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:23:05.429710  257816 cri.go:89] found id: ""
	I0919 23:23:05.429744  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.429755  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:23:05.429765  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:23:05.429833  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:23:05.475196  257816 cri.go:89] found id: ""
	I0919 23:23:05.475229  257816 logs.go:282] 0 containers: []
	W0919 23:23:05.475240  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:23:05.475251  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:23:05.475266  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:05.544470  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:23:05.544505  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:05.584696  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:23:05.584724  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:23:05.637466  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:23:05.637513  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:23:05.684509  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:23:05.684548  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:23:05.799540  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:23:05.799575  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:23:05.822871  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:23:05.822916  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:23:05.906683  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:23:05.906743  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:23:05.906764  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:06.580638  283801 kubeadm.go:310] [init] Using Kubernetes version: v1.34.0
	I0919 23:23:06.580718  283801 kubeadm.go:310] [preflight] Running pre-flight checks
	I0919 23:23:06.580848  283801 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0919 23:23:06.580931  283801 kubeadm.go:310] KERNEL_VERSION: 6.8.0-1037-gcp
	I0919 23:23:06.580992  283801 kubeadm.go:310] OS: Linux
	I0919 23:23:06.581038  283801 kubeadm.go:310] CGROUPS_CPU: enabled
	I0919 23:23:06.581145  283801 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0919 23:23:06.581200  283801 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0919 23:23:06.581264  283801 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0919 23:23:06.581441  283801 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0919 23:23:06.581501  283801 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0919 23:23:06.581558  283801 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0919 23:23:06.581614  283801 kubeadm.go:310] CGROUPS_IO: enabled
	I0919 23:23:06.581727  283801 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0919 23:23:06.581836  283801 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0919 23:23:06.581933  283801 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0919 23:23:06.582016  283801 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0919 23:23:06.584192  283801 out.go:252]   - Generating certificates and keys ...
	I0919 23:23:06.584282  283801 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0919 23:23:06.584375  283801 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0919 23:23:06.584487  283801 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0919 23:23:06.584579  283801 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0919 23:23:06.584680  283801 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0919 23:23:06.584781  283801 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0919 23:23:06.584862  283801 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0919 23:23:06.585058  283801 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-523696 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0919 23:23:06.585600  283801 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0919 23:23:06.585716  283801 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-523696 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I0919 23:23:06.585782  283801 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0919 23:23:06.585856  283801 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0919 23:23:06.585896  283801 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0919 23:23:06.585961  283801 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0919 23:23:06.586021  283801 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0919 23:23:06.586170  283801 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0919 23:23:06.586294  283801 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0919 23:23:06.586410  283801 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0919 23:23:06.586487  283801 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0919 23:23:06.586587  283801 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0919 23:23:06.586670  283801 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0919 23:23:06.588719  283801 out.go:252]   - Booting up control plane ...
	I0919 23:23:06.588832  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0919 23:23:06.588940  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0919 23:23:06.589035  283801 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0919 23:23:06.589236  283801 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0919 23:23:06.589377  283801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I0919 23:23:06.589616  283801 kubeadm.go:310] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I0919 23:23:06.589819  283801 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0919 23:23:06.589885  283801 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0919 23:23:06.590082  283801 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0919 23:23:06.590282  283801 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0919 23:23:06.590367  283801 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 502.077104ms
	I0919 23:23:06.590491  283801 kubeadm.go:310] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I0919 23:23:06.590629  283801 kubeadm.go:310] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I0919 23:23:06.590748  283801 kubeadm.go:310] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I0919 23:23:06.590875  283801 kubeadm.go:310] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I0919 23:23:06.590977  283801 kubeadm.go:310] [control-plane-check] kube-controller-manager is healthy after 1.907833474s
	I0919 23:23:06.591076  283801 kubeadm.go:310] [control-plane-check] kube-scheduler is healthy after 2.656054382s
	I0919 23:23:06.591186  283801 kubeadm.go:310] [control-plane-check] kube-apiserver is healthy after 4.50128624s
	I0919 23:23:06.591364  283801 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0919 23:23:06.591507  283801 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0919 23:23:06.591556  283801 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0919 23:23:06.591745  283801 kubeadm.go:310] [mark-control-plane] Marking the node default-k8s-diff-port-523696 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0919 23:23:06.591814  283801 kubeadm.go:310] [bootstrap-token] Using token: gs716u.ekkbhj331z411y8t
	I0919 23:23:06.593749  283801 out.go:252]   - Configuring RBAC rules ...
	I0919 23:23:06.593906  283801 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0919 23:23:06.594079  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0919 23:23:06.594267  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0919 23:23:06.594441  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0919 23:23:06.594597  283801 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0919 23:23:06.594744  283801 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0919 23:23:06.594924  283801 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0919 23:23:06.594977  283801 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0919 23:23:06.595042  283801 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0919 23:23:06.595052  283801 kubeadm.go:310] 
	I0919 23:23:06.595188  283801 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0919 23:23:06.595204  283801 kubeadm.go:310] 
	I0919 23:23:06.595315  283801 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0919 23:23:06.595332  283801 kubeadm.go:310] 
	I0919 23:23:06.595376  283801 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0919 23:23:06.595445  283801 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0919 23:23:06.595521  283801 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0919 23:23:06.595532  283801 kubeadm.go:310] 
	I0919 23:23:06.595616  283801 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0919 23:23:06.595625  283801 kubeadm.go:310] 
	I0919 23:23:06.595694  283801 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0919 23:23:06.595702  283801 kubeadm.go:310] 
	I0919 23:23:06.595775  283801 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0919 23:23:06.595887  283801 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0919 23:23:06.595995  283801 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0919 23:23:06.596007  283801 kubeadm.go:310] 
	I0919 23:23:06.596138  283801 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0919 23:23:06.596252  283801 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0919 23:23:06.596261  283801 kubeadm.go:310] 
	I0919 23:23:06.596399  283801 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8444 --token gs716u.ekkbhj331z411y8t \
	I0919 23:23:06.596569  283801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 \
	I0919 23:23:06.596605  283801 kubeadm.go:310] 	--control-plane 
	I0919 23:23:06.596614  283801 kubeadm.go:310] 
	I0919 23:23:06.596727  283801 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0919 23:23:06.596737  283801 kubeadm.go:310] 
	I0919 23:23:06.596805  283801 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8444 --token gs716u.ekkbhj331z411y8t \
	I0919 23:23:06.596908  283801 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:098d60743c09d0d8fa46dee3571b812ff83422b440550703c225b91785bf99c3 
	I0919 23:23:06.596918  283801 cni.go:84] Creating CNI manager for ""
	I0919 23:23:06.596924  283801 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:23:06.598689  283801 out.go:179] * Configuring CNI (Container Networking Interface) ...
	W0919 23:23:03.767381  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	W0919 23:23:06.267465  278994 node_ready.go:57] node "embed-certs-756077" has "Ready":"False" status (will retry)
	I0919 23:23:06.600228  283801 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:23:06.605008  283801 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:23:06.605031  283801 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:23:06.627671  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:23:06.873326  283801 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:23:06.873427  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:06.873467  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-523696 minikube.k8s.io/updated_at=2025_09_19T23_23_06_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=default-k8s-diff-port-523696 minikube.k8s.io/primary=true
	I0919 23:23:06.882745  283801 ops.go:34] apiserver oom_adj: -16
	I0919 23:23:06.998327  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:07.498912  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:07.999219  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:08.499401  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:08.998489  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:09.499219  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:09.999002  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:10.499228  283801 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:23:08.450168  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:23:08.450653  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:23:08.450732  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:23:08.450776  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:23:08.487402  257816 cri.go:89] found id: "314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:08.487429  257816 cri.go:89] found id: ""
	I0919 23:23:08.487437  257816 logs.go:282] 1 containers: [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390]
	I0919 23:23:08.487496  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:08.491761  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:23:08.491841  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:23:08.534526  257816 cri.go:89] found id: ""
	I0919 23:23:08.534558  257816 logs.go:282] 0 containers: []
	W0919 23:23:08.534569  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:23:08.534576  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:23:08.534629  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:23:08.587151  257816 cri.go:89] found id: ""
	I0919 23:23:08.587183  257816 logs.go:282] 0 containers: []
	W0919 23:23:08.587195  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:23:08.587204  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:23:08.587265  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:23:08.627054  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:08.627073  257816 cri.go:89] found id: ""
	I0919 23:23:08.627082  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:23:08.627145  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:08.630911  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:23:08.630976  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:23:08.673852  257816 cri.go:89] found id: ""
	I0919 23:23:08.673883  257816 logs.go:282] 0 containers: []
	W0919 23:23:08.673895  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:23:08.673916  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:23:08.673980  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:23:08.715425  257816 cri.go:89] found id: "31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	I0919 23:23:08.715454  257816 cri.go:89] found id: ""
	I0919 23:23:08.715464  257816 logs.go:282] 1 containers: [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074]
	I0919 23:23:08.715520  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:23:08.719881  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:23:08.719933  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:23:08.772425  257816 cri.go:89] found id: ""
	I0919 23:23:08.772452  257816 logs.go:282] 0 containers: []
	W0919 23:23:08.772463  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:23:08.772471  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:23:08.772520  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:23:08.820163  257816 cri.go:89] found id: ""
	I0919 23:23:08.820191  257816 logs.go:282] 0 containers: []
	W0919 23:23:08.820202  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:23:08.820212  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:23:08.820226  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:23:08.865762  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:23:08.865792  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:23:08.910085  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:23:08.910144  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:23:09.016604  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:23:09.016632  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:23:09.036662  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:23:09.036697  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:23:09.111537  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:23:09.111561  257816 logs.go:123] Gathering logs for kube-apiserver [314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390] ...
	I0919 23:23:09.111573  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 314ed0d695be1629b1bb6d7461a90a6c5e56dae1d06c2960234c45c3ef3d6390"
	I0919 23:23:09.159383  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:23:09.159410  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:23:09.234374  257816 logs.go:123] Gathering logs for kube-controller-manager [31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074] ...
	I0919 23:23:09.234411  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 31acbae60abe26aea3098ecef3f10d0f532d57e8fb00d64024103f5041f4f074"
	
	
	==> CRI-O <==
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.344528268Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=992ebe60-b3ca-418e-be33-f4b400c09084 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.345405362Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=e50fbcc0-d94e-4df3-8bc5-b638d72a3bf1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.345507829Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.424763312Z" level=info msg="Created container 6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=e50fbcc0-d94e-4df3-8bc5-b638d72a3bf1 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.425427023Z" level=info msg="Starting container: 6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a" id=4bcaf4dc-62a7-449c-9bbe-04f7877d151c name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:22:39 no-preload-042753 crio[563]: time="2025-09-19 23:22:39.433163661Z" level=info msg="Started container" PID=2073 containerID=6efd414c52465cca582b62a1a6ea49311c816e95c2f8404a7e795bef063cfb0a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper id=4bcaf4dc-62a7-449c-9bbe-04f7877d151c name=/runtime.v1.RuntimeService/StartContainer sandboxID=39ebb5539fb46c6fd467f8f474e91c9b6e85e2b4aecec5b146f7e1321de059a5
	Sep 19 23:22:40 no-preload-042753 crio[563]: time="2025-09-19 23:22:40.450163947Z" level=info msg="Removing container: eb5aaaaf0ca48d9e42971b289931cd4255e5aa0e2267a32bc3fd3744ee35217b" id=7a06cdfd-8862-4b58-9532-de01df9c94b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:22:40 no-preload-042753 crio[563]: time="2025-09-19 23:22:40.471649278Z" level=info msg="Removed container eb5aaaaf0ca48d9e42971b289931cd4255e5aa0e2267a32bc3fd3744ee35217b: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-86q9r/dashboard-metrics-scraper" id=7a06cdfd-8862-4b58-9532-de01df9c94b6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.454601572Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=94f91c7d-b6c4-4d2b-ac7a-cece75ef234b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.455071430Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=94f91c7d-b6c4-4d2b-ac7a-cece75ef234b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.455888081Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=86a0a6b2-913d-4862-b3e5-a105ec76f294 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.456094705Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651],Size_:31468661,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=86a0a6b2-913d-4862-b3e5-a105ec76f294 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.457808766Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=9894f755-f2c9-4099-aa03-3607430c4cab name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.457922656Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.472317763Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/e2558af3b2a8d93c2cf722931b478d1c290646b66a872c39c73a5a68cb513781/merged/etc/passwd: no such file or directory"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.472362840Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/e2558af3b2a8d93c2cf722931b478d1c290646b66a872c39c73a5a68cb513781/merged/etc/group: no such file or directory"
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.535761714Z" level=info msg="Created container 33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276: kube-system/storage-provisioner/storage-provisioner" id=9894f755-f2c9-4099-aa03-3607430c4cab name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.536519743Z" level=info msg="Starting container: 33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276" id=feae251b-290c-4d8a-9897-b9451028405a name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:22:41 no-preload-042753 crio[563]: time="2025-09-19 23:22:41.544033782Z" level=info msg="Started container" PID=2143 containerID=33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276 description=kube-system/storage-provisioner/storage-provisioner id=feae251b-290c-4d8a-9897-b9451028405a name=/runtime.v1.RuntimeService/StartContainer sandboxID=f577473836a1dfa34428aa63d6db150e816d45faf2868d4909004d90a38704b7
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.342831786Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=21767fd8-6b05-4e07-8762-846e1f0a6c10 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.343046020Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=21767fd8-6b05-4e07-8762-846e1f0a6c10 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.343755750Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=61e55b3f-cd0b-4ecb-b290-b61280888709 name=/runtime.v1.ImageService/PullImage
	Sep 19 23:22:50 no-preload-042753 crio[563]: time="2025-09-19 23:22:50.409602506Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:23:01 no-preload-042753 crio[563]: time="2025-09-19 23:23:01.342565403Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=8945447a-61b9-4012-bd00-4c279e314f8f name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:23:01 no-preload-042753 crio[563]: time="2025-09-19 23:23:01.342919551Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=8945447a-61b9-4012-bd00-4c279e314f8f name=/runtime.v1.ImageService/ImageStatus
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	33d900fc07a78       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           30 seconds ago       Running             storage-provisioner         2                   f577473836a1d       storage-provisioner
	6efd414c52465       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   39ebb5539fb46       dashboard-metrics-scraper-6ffb444bf9-86q9r
	8e447cc9d3b20       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   53 seconds ago       Running             kubernetes-dashboard        0                   2c36db05b3ff0       kubernetes-dashboard-855c9754f9-hdlqb
	5c3068fcc8ac4       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           About a minute ago   Running             coredns                     1                   61a70e450ec2f       coredns-66bc5c9577-5jl4c
	d7d9c35dadfbe       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           About a minute ago   Running             busybox                     1                   2a18e609975e9       busybox
	7dcf5220a7c81       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           About a minute ago   Running             kindnet-cni                 1                   ca0c9eb5f54b5       kindnet-fzdsg
	f4ddbac3a81f0       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           About a minute ago   Exited              storage-provisioner         1                   f577473836a1d       storage-provisioner
	c57f08ebb2421       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                           About a minute ago   Running             kube-proxy                  1                   ed2435d863a06       kube-proxy-bgkfm
	ce95677f2eeb4       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                           About a minute ago   Running             kube-scheduler              1                   8eb5da44cc874       kube-scheduler-no-preload-042753
	bc12ec0b22189       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   a293320e4700c       etcd-no-preload-042753
	889cfcbec6274       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                           About a minute ago   Running             kube-apiserver              1                   300abba554857       kube-apiserver-no-preload-042753
	11a93459f40ee       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                           About a minute ago   Running             kube-controller-manager     1                   3dc34dd0ac831       kube-controller-manager-no-preload-042753
	
	
	==> coredns [5c3068fcc8ac406d25184b56548159ca9fa994e40d728e7ca23e59518921da2f] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:34493 - 64162 "HINFO IN 5847359376651155573.6850278063508237149. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012952257s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> describe nodes <==
	Name:               no-preload-042753
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-042753
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=no-preload-042753
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_21_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:21:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-042753
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:23:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:22:40 +0000   Fri, 19 Sep 2025 23:21:28 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    no-preload-042753
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 014336e97c2444b2adabeb7e22dc8208
	  System UUID:                4988ced6-3606-4eae-9dae-2b8a811e936b
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 coredns-66bc5c9577-5jl4c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     118s
	  kube-system                 etcd-no-preload-042753                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-fzdsg                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      119s
	  kube-system                 kube-apiserver-no-preload-042753              250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-no-preload-042753     200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-bgkfm                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-no-preload-042753              100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 metrics-server-746fcd58dc-p99mj               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         91s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         117s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-86q9r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-hdlqb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 116s               kube-proxy       
	  Normal  Starting                 61s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m4s               kubelet          Node no-preload-042753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s               kubelet          Node no-preload-042753 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s               kubelet          Node no-preload-042753 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           119s               node-controller  Node no-preload-042753 event: Registered Node no-preload-042753 in Controller
	  Normal  NodeReady                104s               kubelet          Node no-preload-042753 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  65s (x8 over 65s)  kubelet          Node no-preload-042753 status is now: NodeHasSufficientMemory
	  Normal  Starting                 65s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    65s (x8 over 65s)  kubelet          Node no-preload-042753 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     65s (x8 over 65s)  kubelet          Node no-preload-042753 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           58s                node-controller  Node no-preload-042753 event: Registered Node no-preload-042753 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 23:21] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +2.000740] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.000000] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999317] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.501476] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499982] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999149] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.001177] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.997827] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.502489] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499017] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999122] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.003267] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.996866] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.503800] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	
	
	==> etcd [bc12ec0b221899bf739aaf4847c036d4b6534e98dda80f575bb37822c45a1235] <==
	{"level":"warn","ts":"2025-09-19T23:22:09.321514Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43236","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.328199Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43260","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.335504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.347929Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.354469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:22:09.361644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43332","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-09-19T23:22:32.208295Z","caller":"traceutil/trace.go:172","msg":"trace[1566850666] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"128.500232ms","start":"2025-09-19T23:22:32.079771Z","end":"2025-09-19T23:22:32.208272Z","steps":["trace[1566850666] 'read index received'  (duration: 128.486635ms)","trace[1566850666] 'applied index is now lower than readState.Index'  (duration: 8.045µs)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.337492Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"257.697346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-042753\" limit:1 ","response":"range_response_count:1 size:4830"}
	{"level":"info","ts":"2025-09-19T23:22:32.337582Z","caller":"traceutil/trace.go:172","msg":"trace[1203492330] range","detail":"{range_begin:/registry/minions/no-preload-042753; range_end:; response_count:1; response_revision:653; }","duration":"257.804428ms","start":"2025-09-19T23:22:32.079761Z","end":"2025-09-19T23:22:32.337565Z","steps":["trace[1203492330] 'agreement among raft nodes before linearized reading'  (duration: 128.614764ms)","trace[1203492330] 'range keys from in-memory index tree'  (duration: 128.967573ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.338041Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.182697ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871618 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.85.2\" mod_revision:635 > success:<request_put:<key:\"/registry/masterleases/192.168.85.2\" value_size:65 lease:499223789708095807 >> failure:<request_range:<key:\"/registry/masterleases/192.168.85.2\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:32.338173Z","caller":"traceutil/trace.go:172","msg":"trace[1727908659] transaction","detail":"{read_only:false; response_revision:654; number_of_response:1; }","duration":"260.200974ms","start":"2025-09-19T23:22:32.077958Z","end":"2025-09-19T23:22:32.338159Z","steps":["trace[1727908659] 'process raft request'  (duration: 130.347519ms)","trace[1727908659] 'compare'  (duration: 129.087586ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.599466Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"183.005229ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:32.599550Z","caller":"traceutil/trace.go:172","msg":"trace[1977638458] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:654; }","duration":"183.099907ms","start":"2025-09-19T23:22:32.416429Z","end":"2025-09-19T23:22:32.599529Z","steps":["trace[1977638458] 'agreement among raft nodes before linearized reading'  (duration: 53.33211ms)","trace[1977638458] 'range keys from in-memory index tree'  (duration: 129.639299ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:32.599704Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.837999ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871624 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" mod_revision:636 > success:<request_put:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" value_size:692 lease:499223789708095111 >> failure:<request_range:<key:\"/registry/events/kube-system/coredns-66bc5c9577-5jl4c.1866d27b79399447\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:32.599810Z","caller":"traceutil/trace.go:172","msg":"trace[1409306081] transaction","detail":"{read_only:false; response_revision:655; number_of_response:1; }","duration":"223.160756ms","start":"2025-09-19T23:22:32.376626Z","end":"2025-09-19T23:22:32.599787Z","steps":["trace[1409306081] 'process raft request'  (duration: 93.172494ms)","trace[1409306081] 'compare'  (duration: 129.733541ms)"],"step_count":2}
	{"level":"warn","ts":"2025-09-19T23:22:50.574256Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.295782ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:50.574345Z","caller":"traceutil/trace.go:172","msg":"trace[1519693429] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:676; }","duration":"157.397477ms","start":"2025-09-19T23:22:50.416931Z","end":"2025-09-19T23:22:50.574329Z","steps":["trace[1519693429] 'agreement among raft nodes before linearized reading'  (duration: 79.868703ms)","trace[1519693429] 'range keys from in-memory index tree'  (duration: 77.378437ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:22:50.574426Z","caller":"traceutil/trace.go:172","msg":"trace[1167440019] transaction","detail":"{read_only:false; response_revision:677; number_of_response:1; }","duration":"224.750681ms","start":"2025-09-19T23:22:50.349654Z","end":"2025-09-19T23:22:50.574405Z","steps":["trace[1167440019] 'process raft request'  (duration: 147.22068ms)","trace[1167440019] 'compare'  (duration: 77.369316ms)"],"step_count":2}
	{"level":"info","ts":"2025-09-19T23:22:50.574446Z","caller":"traceutil/trace.go:172","msg":"trace[122641576] transaction","detail":"{read_only:false; response_revision:678; number_of_response:1; }","duration":"107.541166ms","start":"2025-09-19T23:22:50.466894Z","end":"2025-09-19T23:22:50.574435Z","steps":["trace[122641576] 'process raft request'  (duration: 107.477523ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:22:50.940920Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"181.666925ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-09-19T23:22:50.941357Z","caller":"traceutil/trace.go:172","msg":"trace[546679557] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:680; }","duration":"182.119376ms","start":"2025-09-19T23:22:50.759216Z","end":"2025-09-19T23:22:50.941336Z","steps":["trace[546679557] 'range keys from in-memory index tree'  (duration: 181.617693ms)"],"step_count":1}
	{"level":"warn","ts":"2025-09-19T23:22:50.941409Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"134.918045ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722595826562871804 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" mod_revision:665 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" value_size:609 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-3nwtx3o6nyyxnhqgkcblt22eta\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-09-19T23:22:50.941480Z","caller":"traceutil/trace.go:172","msg":"trace[1836538646] transaction","detail":"{read_only:false; response_revision:681; number_of_response:1; }","duration":"260.581607ms","start":"2025-09-19T23:22:50.680885Z","end":"2025-09-19T23:22:50.941467Z","steps":["trace[1836538646] 'process raft request'  (duration: 125.535727ms)","trace[1836538646] 'compare'  (duration: 134.507865ms)"],"step_count":2}
	2025/09/19 23:23:08 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	2025/09/19 23:23:08 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 23:23:12 up  2:05,  0 users,  load average: 3.22, 2.85, 1.88
	Linux no-preload-042753 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7dcf5220a7c8124462854575c668dbe751600ba3788e0b299366cc89fc5c6e48] <==
	I0919 23:22:11.092475       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I0919 23:22:11.092703       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:22:11.092728       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:22:11.092753       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:22:11Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:22:11.295657       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:22:11.295707       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:22:11.295721       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:22:11.296438       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:22:11.691880       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:22:11.691919       1 metrics.go:72] Registering metrics
	I0919 23:22:11.691995       1 controller.go:711] "Syncing nftables rules"
	I0919 23:22:21.296283       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:21.296343       1 main.go:301] handling current node
	I0919 23:22:31.296233       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:31.296269       1 main.go:301] handling current node
	I0919 23:22:41.296307       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:41.296355       1 main.go:301] handling current node
	I0919 23:22:51.296328       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:22:51.296379       1 main.go:301] handling current node
	I0919 23:23:01.303704       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:23:01.303749       1 main.go:301] handling current node
	I0919 23:23:11.295707       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I0919 23:23:11.295758       1 main.go:301] handling current node
	
	
	==> kube-apiserver [889cfcbec62741aad18885cb57ecac40e49d488bdf845746042edfccc8daa851] <==
	E0919 23:23:08.036370       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.036629       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.036429       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 105.123µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:23:08.037767       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.037810       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.037819       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.038921       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:08.039049       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.68348ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:23:08.039064       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.941637ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/no-preload-042753" result=null
	I0919 23:23:09.541242       1 stats.go:136] "Error getting keys" err="empty key: \"\""
	W0919 23:23:10.902075       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:23:10.902233       1 controller.go:102] "Unhandled Error" err=<
		loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0919 23:23:10.902248       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0919 23:23:10.903211       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:23:10.903255       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:23:10.903271       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	{"level":"warn","ts":"2025-09-19T23:23:11.054858Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0016503c0/127.0.0.1:2379","method":"/etcdserverpb.Lease/LeaseGrant","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:23:11.055083       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:11.055329       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 16.914µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:23:11.055443       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:11.056735       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:23:11.057422       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.608341ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [11a93459f40ee7ac60c8b016660d54641b26f897a1f1e9f042ddb9290811062f] <==
	I0919 23:22:14.181083       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:22:14.181397       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0919 23:22:14.183038       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I0919 23:22:14.185282       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I0919 23:22:14.207770       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I0919 23:22:14.210092       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I0919 23:22:14.214367       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I0919 23:22:14.215766       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I0919 23:22:14.228537       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I0919 23:22:14.229741       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:22:14.229768       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I0919 23:22:14.229935       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I0919 23:22:14.230042       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I0919 23:22:14.230097       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I0919 23:22:14.230202       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:22:14.230293       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I0919 23:22:14.230573       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I0919 23:22:14.237942       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:22:14.247198       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:22:14.251358       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I0919 23:22:14.253592       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:22:14.253731       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:22:14.740735       1 endpointslice_controller.go:344] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kubernetes-dashboard/kubernetes-dashboard" err="EndpointSlice informer cache is out of date"
	E0919 23:22:44.243639       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:22:44.261965       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [c57f08ebb2421ca234dd183d49e85973ee1e5dae991c13b067b0b303f8250382] <==
	I0919 23:22:10.915352       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:22:10.967404       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:22:11.068251       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:22:11.068281       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E0919 23:22:11.068371       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:22:11.091924       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:22:11.092012       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:22:11.098709       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:22:11.099556       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:22:11.099582       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:22:11.101367       1 config.go:200] "Starting service config controller"
	I0919 23:22:11.101684       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:22:11.101769       1 config.go:309] "Starting node config controller"
	I0919 23:22:11.101791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:22:11.102510       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:22:11.102043       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:22:11.102609       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:22:11.102021       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:22:11.102641       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:22:11.202347       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:22:11.203586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I0919 23:22:11.203600       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [ce95677f2eeb4f804753d5cd028d512a58256be3cfc52031593fea5a8cde0340] <==
	I0919 23:22:08.414937       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:22:09.837040       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:22:09.837084       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:22:09.837096       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:22:09.837128       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:22:09.862181       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:22:09.862208       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:22:09.865416       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:22:09.865647       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:22:09.866181       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:22:09.865682       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:22:09.966349       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.772615    3168 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.773723    3168 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="cri-o" version="1.24.6" apiVersion="v1"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.774493    3168 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.774534    3168 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.774586    3168 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.775462    3168 server.go:1262] "Started kubelet"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.775599    3168 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.775693    3168 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.775853    3168 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.776136    3168 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.778188    3168 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.779127    3168 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.779185    3168 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.779477    3168 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.780376    3168 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.780483    3168 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"no-preload-042753\" not found"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.780744    3168 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.782729    3168 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.783916    3168 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.791014    3168 factory.go:223] Registration of the crio container factory successfully
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: I0919 23:23:12.791501    3168 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.791532    3168 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:23:12 no-preload-042753 kubelet[3168]: E0919 23:23:12.791552    3168 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:23:12 no-preload-042753 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:23:12 no-preload-042753 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> kubernetes-dashboard [8e447cc9d3b20c0b9fadcfe22402d8f0da4f460c388920cd360017de22c1d27c] <==
	2025/09/19 23:22:18 Using namespace: kubernetes-dashboard
	2025/09/19 23:22:18 Using in-cluster config to connect to apiserver
	2025/09/19 23:22:18 Using secret token for csrf signing
	2025/09/19 23:22:18 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:22:18 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:22:18 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:22:18 Generating JWE encryption key
	2025/09/19 23:22:18 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:22:18 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:22:18 Initializing JWE encryption key from synchronized object
	2025/09/19 23:22:18 Creating in-cluster Sidecar client
	2025/09/19 23:22:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:22:18 Serving insecurely on HTTP port: 9090
	2025/09/19 23:22:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:22:18 Starting overwatch
	
	
	==> storage-provisioner [33d900fc07a782fe88c632462b36212f394ee86f9b5d46ab2f94d590849b5276] <==
	W0919 23:22:41.572647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:45.029028       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:49.289053       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:52.887484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:55.941276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.964729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.973635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:22:58.973819       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:22:58.973912       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"05ef580f-ffae-4b3d-9189-3655be70accb", APIVersion:"v1", ResourceVersion:"689", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d became leader
	I0919 23:22:58.973998       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d!
	W0919 23:22:58.977677       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:22:58.994562       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:22:59.075237       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-042753_5855eaec-4cf1-4c5f-bc72-0e44bbef2a1d!
	W0919 23:23:00.997996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:01.001665       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:03.005008       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:03.010513       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:05.013306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:05.017613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:07.737535       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:07.742504       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:09.746358       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:09.751320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:11.784892       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:23:11.793231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [f4ddbac3a81f0ed31d22f6b78a0824a09338a27187fc4ba2b7e7957cdcae6f30] <==
	I0919 23:22:10.863872       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:22:40.866771       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
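The kubelet entries at the end of the dump above show the restart dying on inotify exhaustion ("unable to create inotify: too many open files", "inotify_init: too many open files"), after which systemd marks kubelet.service as failed. A minimal way to inspect and, if needed, raise the node's inotify limits from the host is sketched below; it assumes the no-preload-042753 profile still exists, and the limit value shown is only illustrative:

	# check the current inotify limits inside the minikube node
	out/minikube-linux-amd64 -p no-preload-042753 ssh "sudo sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches"
	# raise the per-user instance limit (example value) and restart the kubelet so it re-creates its watchers
	out/minikube-linux-amd64 -p no-preload-042753 ssh "sudo sysctl -w fs.inotify.max_user_instances=8192"
	out/minikube-linux-amd64 -p no-preload-042753 ssh "sudo systemctl restart kubelet"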
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-042753 -n no-preload-042753: exit status 2 (315.64761ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-042753 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-p99mj
helpers_test.go:282: ======> post-mortem[TestStartStop/group/no-preload/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj: exit status 1 (69.769811ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-p99mj" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context no-preload-042753 describe pod metrics-server-746fcd58dc-p99mj: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/Pause (7.79s)
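The kube-controller-manager errors in the same dump ("stale GroupVersion discovery: metrics.k8s.io/v1beta1") line up with the metrics-server pod the post-mortem could no longer find. If the profile were still around, the quickest checks would be the aggregated APIService and its backing deployment; a sketch, assuming the APIService carries its conventional name:

	kubectl --context no-preload-042753 get apiservice v1beta1.metrics.k8s.io
	kubectl --context no-preload-042753 -n kube-system get deploy metrics-server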

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (8.81s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-756077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (327.035ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-756077 -n embed-certs-756077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (359.811921ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-756077 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (407.798268ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-756077 -n embed-certs-756077
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (370.632202ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
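The same pause/unpause sequence the test drives can be replayed by hand to see whether the kubelet ever comes back after unpause; a sketch, assuming the embed-certs-756077 profile is still running:

	out/minikube-linux-amd64 pause -p embed-certs-756077 --alsologtostderr -v=1
	out/minikube-linux-amd64 unpause -p embed-certs-756077 --alsologtostderr -v=1
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-756077 -n embed-certs-756077
	# if status still reports Stopped, look at the unit inside the node
	out/minikube-linux-amd64 -p embed-certs-756077 ssh "sudo systemctl status kubelet --no-pager"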
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-756077
helpers_test.go:243: (dbg) docker inspect embed-certs-756077:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7",
	        "Created": "2025-09-19T23:22:33.244479146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:24:10.159311961Z",
	            "FinishedAt": "2025-09-19T23:24:09.172092572Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/hosts",
	        "LogPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7-json.log",
	        "Name": "/embed-certs-756077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-756077:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-756077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7",
	                "LowerDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-756077",
	                "Source": "/var/lib/docker/volumes/embed-certs-756077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-756077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-756077",
	                "name.minikube.sigs.k8s.io": "embed-certs-756077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6252335d681e947a55f27b707783c4ba4815ae889d236371c4a40f1c6dadb4",
	            "SandboxKey": "/var/run/docker/netns/df6252335d68",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-756077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:67:ef:09:0b:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e7ff6da14c68ba8ea1175fd14863904b7ad0b6597f22825ec68236b0665d3cb",
	                    "EndpointID": "fe2bee87bd1d31a7d45967d52f408ccc6ebf40d890032e0e3f2dafff1b1b7280",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-756077",
	                        "d5747027b27d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
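The inspect output above shows the container itself running and not frozen ("Paused": false), so the stuck "Stopped" kubelet status points at the processes inside the node rather than at the container state. The two relevant fields can be pulled directly with a Go template instead of the full dump; this assumes the container still exists:

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-756077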
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (388.946558ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756077 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-756077 logs -n 25: (1.567388458s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ unpause │ -p no-preload-042753 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p no-preload-042753                                                                                                                                                                                                                          │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p no-preload-042753                                                                                                                                                                                                                          │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-734532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ stop    │ -p newest-cni-734532 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-734532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-756077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ stop    │ -p embed-certs-756077 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:24 UTC │
	│ image   │ newest-cni-734532 image list --format=json                                                                                                                                                                                                    │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ pause   │ -p newest-cni-734532 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ unpause │ -p newest-cni-734532 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:24 UTC │
	│ delete  │ -p newest-cni-734532                                                                                                                                                                                                                          │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ delete  │ -p newest-cni-734532                                                                                                                                                                                                                          │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p auto-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-781969                  │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p default-k8s-diff-port-523696 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p embed-certs-756077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-523696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │                     │
	│ image   │ embed-certs-756077 image list --format=json                                                                                                                                                                                                   │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ pause   │ -p embed-certs-756077 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ unpause │ -p embed-certs-756077 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:24:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:24:26.283657  309140 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:24:26.283815  309140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:24:26.283823  309140 out.go:374] Setting ErrFile to fd 2...
	I0919 23:24:26.283829  309140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:24:26.284132  309140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:24:26.284765  309140 out.go:368] Setting JSON to false
	I0919 23:24:26.286679  309140 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7616,"bootTime":1758316650,"procs":681,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:24:26.286825  309140 start.go:140] virtualization: kvm guest
	I0919 23:24:26.289274  309140 out.go:179] * [default-k8s-diff-port-523696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:24:26.290737  309140 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:24:26.290736  309140 notify.go:220] Checking for updates...
	I0919 23:24:26.295556  309140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:24:26.297180  309140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:26.298865  309140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 23:24:26.300298  309140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:24:26.301687  309140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:24:26.304857  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:26.305618  309140 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:24:26.338118  309140 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:24:26.338297  309140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:24:26.421312  309140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:24:26.407742769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:24:26.421477  309140 docker.go:318] overlay module found
	I0919 23:24:26.424827  309140 out.go:179] * Using the docker driver based on existing profile
	I0919 23:24:26.426187  309140 start.go:304] selected driver: docker
	I0919 23:24:26.426204  309140 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName
:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2
000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:26.426288  309140 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:24:26.427008  309140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:24:26.505958  309140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:24:26.488620346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:24:26.506591  309140 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:24:26.506637  309140 cni.go:84] Creating CNI manager for ""
	I0919 23:24:26.506721  309140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:24:26.506790  309140 start.go:348] cluster config:
	{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerR
untime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:26.511950  309140 out.go:179] * Starting "default-k8s-diff-port-523696" primary control-plane node in "default-k8s-diff-port-523696" cluster
	I0919 23:24:26.513789  309140 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 23:24:26.515490  309140 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:24:26.517264  309140 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:24:26.517329  309140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:24:26.517343  309140 cache.go:58] Caching tarball of preloaded images
	I0919 23:24:26.517377  309140 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:24:26.517443  309140 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:24:26.517458  309140 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:24:26.517589  309140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:24:26.543420  309140 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:24:26.543441  309140 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:24:26.543462  309140 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:24:26.543491  309140 start.go:360] acquireMachinesLock for default-k8s-diff-port-523696: {Name:mk3e8cf47fc7b3222021a2ea03ba5708af5f316a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:24:26.543572  309140 start.go:364] duration metric: took 48.565µs to acquireMachinesLock for "default-k8s-diff-port-523696"
	I0919 23:24:26.543596  309140 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:24:26.543606  309140 fix.go:54] fixHost starting: 
	I0919 23:24:26.543824  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:26.564538  309140 fix.go:112] recreateIfNeeded on default-k8s-diff-port-523696: state=Stopped err=<nil>
	W0919 23:24:26.564567  309140 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 23:24:26.037631  302093 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:24:26.042904  302093 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:24:26.042933  302093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:24:26.067470  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:24:26.343569  302093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:24:26.343643  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:26.343679  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-781969 minikube.k8s.io/updated_at=2025_09_19T23_24_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=auto-781969 minikube.k8s.io/primary=true
	I0919 23:24:26.353776  302093 ops.go:34] apiserver oom_adj: -16
	I0919 23:24:26.468611  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:26.969359  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:27.468754  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:27.969245  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:28.468965  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0919 23:24:26.737927  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:29.236370  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:28.969587  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:29.468825  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:29.969314  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:30.469035  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:30.557900  302093 kubeadm.go:1105] duration metric: took 4.214314591s to wait for elevateKubeSystemPrivileges
	I0919 23:24:30.557940  302093 kubeadm.go:394] duration metric: took 15.744021415s to StartCluster
	I0919 23:24:30.557961  302093 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:30.558072  302093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:30.560227  302093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:30.560534  302093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:24:30.560543  302093 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:24:30.560657  302093 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:24:30.560739  302093 addons.go:69] Setting storage-provisioner=true in profile "auto-781969"
	I0919 23:24:30.560757  302093 addons.go:238] Setting addon storage-provisioner=true in "auto-781969"
	I0919 23:24:30.560783  302093 host.go:66] Checking if "auto-781969" exists ...
	I0919 23:24:30.560798  302093 config.go:182] Loaded profile config "auto-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:30.560935  302093 addons.go:69] Setting default-storageclass=true in profile "auto-781969"
	I0919 23:24:30.560950  302093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-781969"
	I0919 23:24:30.561276  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.561311  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.564238  302093 out.go:179] * Verifying Kubernetes components...
	I0919 23:24:30.565938  302093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:30.587926  302093 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:24:26.904176  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:26.904556  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:26.904610  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:26.904659  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:26.955554  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:26.955575  257816 cri.go:89] found id: ""
	I0919 23:24:26.955584  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:26.955643  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:26.960635  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:26.960713  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:27.010249  257816 cri.go:89] found id: ""
	I0919 23:24:27.010280  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.010289  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:27.010297  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:27.010353  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:27.071442  257816 cri.go:89] found id: ""
	I0919 23:24:27.071470  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.071482  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:27.071489  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:27.071558  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:27.132397  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:27.132470  257816 cri.go:89] found id: ""
	I0919 23:24:27.132485  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:27.132538  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:27.137314  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:27.137390  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:27.206185  257816 cri.go:89] found id: ""
	I0919 23:24:27.206216  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.206228  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:27.206235  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:27.206291  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:27.249808  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:27.249832  257816 cri.go:89] found id: ""
	I0919 23:24:27.249841  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:27.249907  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:27.255500  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:27.255568  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:27.297727  257816 cri.go:89] found id: ""
	I0919 23:24:27.297755  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.297763  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:27.297769  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:27.297822  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:27.336933  257816 cri.go:89] found id: ""
	I0919 23:24:27.336966  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.336976  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:27.336987  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:27.336998  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:27.390200  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:27.390234  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:27.434021  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:27.434049  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:27.555056  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:27.555095  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:27.574218  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:27.574248  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:27.646492  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:27.646519  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:27.646536  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:27.702869  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:27.702903  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:27.795346  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:27.795440  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:30.338192  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:30.338777  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:30.338841  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:30.338909  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:30.382908  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:30.382935  257816 cri.go:89] found id: ""
	I0919 23:24:30.382944  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:30.383005  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.388474  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:30.388560  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:30.435791  257816 cri.go:89] found id: ""
	I0919 23:24:30.435817  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.435827  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:30.435834  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:30.435890  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:30.475322  257816 cri.go:89] found id: ""
	I0919 23:24:30.475352  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.475384  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:30.475392  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:30.475457  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:30.528793  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:30.528817  257816 cri.go:89] found id: ""
	I0919 23:24:30.528825  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:30.528876  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.533808  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:30.533888  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:30.582002  257816 cri.go:89] found id: ""
	I0919 23:24:30.582044  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.582055  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:30.582063  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:30.582161  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:30.642550  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:30.642572  257816 cri.go:89] found id: ""
	I0919 23:24:30.642580  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:30.642622  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.646953  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:30.647029  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:30.701494  257816 cri.go:89] found id: ""
	I0919 23:24:30.701543  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.701558  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:30.701565  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:30.701649  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:30.764368  257816 cri.go:89] found id: ""
	I0919 23:24:30.764462  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.764486  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:30.764498  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:30.764513  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:30.792998  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:30.793048  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:30.893009  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:30.893031  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:30.893046  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:30.961638  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:30.961678  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:30.588985  302093 addons.go:238] Setting addon default-storageclass=true in "auto-781969"
	I0919 23:24:30.589035  302093 host.go:66] Checking if "auto-781969" exists ...
	I0919 23:24:30.589527  302093 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:30.589544  302093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:24:30.589595  302093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-781969
	I0919 23:24:30.589784  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.623768  302093 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:30.623861  302093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:24:30.624016  302093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-781969
	I0919 23:24:30.625118  302093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/auto-781969/id_rsa Username:docker}
	I0919 23:24:30.649823  302093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/auto-781969/id_rsa Username:docker}
	I0919 23:24:30.675726  302093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:24:30.717894  302093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:30.779713  302093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:30.779764  302093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:30.942091  302093 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0919 23:24:30.943209  302093 node_ready.go:35] waiting up to 15m0s for node "auto-781969" to be "Ready" ...
	I0919 23:24:31.163669  302093 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:24:26.566548  309140 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-523696" ...
	I0919 23:24:26.566615  309140 cli_runner.go:164] Run: docker start default-k8s-diff-port-523696
	I0919 23:24:26.870720  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:26.893342  309140 kic.go:430] container "default-k8s-diff-port-523696" state is running.
	I0919 23:24:26.894016  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:26.924729  309140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:24:26.925132  309140 machine.go:93] provisionDockerMachine start ...
	I0919 23:24:26.925209  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:26.948711  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:26.949057  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:26.949077  309140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:24:26.949781  309140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52750->127.0.0.1:33109: read: connection reset by peer
	I0919 23:24:30.092067  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:24:30.092120  309140 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523696"
	I0919 23:24:30.092185  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:30.112640  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:30.112936  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:30.112953  309140 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523696 && echo "default-k8s-diff-port-523696" | sudo tee /etc/hostname
	I0919 23:24:30.273791  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:24:30.273872  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:30.292713  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:30.292924  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:30.292946  309140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523696/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:24:30.434051  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:24:30.434082  309140 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 23:24:30.434121  309140 ubuntu.go:190] setting up certificates
	I0919 23:24:30.434133  309140 provision.go:84] configureAuth start
	I0919 23:24:30.434186  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:30.453901  309140 provision.go:143] copyHostCerts
	I0919 23:24:30.453969  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 23:24:30.453987  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 23:24:30.454091  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 23:24:30.454257  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 23:24:30.454272  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 23:24:30.454317  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 23:24:30.454445  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 23:24:30.454458  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 23:24:30.454497  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 23:24:30.454593  309140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523696 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-523696 localhost minikube]
	I0919 23:24:31.411856  309140 provision.go:177] copyRemoteCerts
	I0919 23:24:31.411911  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:24:31.411952  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.430897  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:31.531843  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:24:31.558712  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 23:24:31.586368  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:24:31.612772  309140 provision.go:87] duration metric: took 1.178628147s to configureAuth
	I0919 23:24:31.612797  309140 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:24:31.612973  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:31.613078  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.632012  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:31.632249  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:31.632267  309140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:24:31.935858  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:24:31.935887  309140 machine.go:96] duration metric: took 5.010735102s to provisionDockerMachine
	I0919 23:24:31.935899  309140 start.go:293] postStartSetup for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:24:31.935912  309140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:24:31.935968  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:24:31.936005  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.956315  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.056792  309140 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:24:32.061192  309140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:24:32.061236  309140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:24:32.061246  309140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:24:32.061253  309140 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:24:32.061269  309140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 23:24:32.061351  309140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 23:24:32.061458  309140 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 23:24:32.061588  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:24:32.072274  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:24:32.099675  309140 start.go:296] duration metric: took 163.760515ms for postStartSetup
	I0919 23:24:32.099759  309140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:24:32.099799  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.120088  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.213560  309140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:24:32.218295  309140 fix.go:56] duration metric: took 5.67468432s for fixHost
	I0919 23:24:32.218318  309140 start.go:83] releasing machines lock for "default-k8s-diff-port-523696", held for 5.674733278s
	I0919 23:24:32.218384  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:32.238458  309140 ssh_runner.go:195] Run: cat /version.json
	I0919 23:24:32.238503  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.238533  309140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:24:32.238607  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.259048  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.259318  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.430569  309140 ssh_runner.go:195] Run: systemctl --version
	I0919 23:24:32.435992  309140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:24:32.579156  309140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:24:32.584528  309140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:24:32.595184  309140 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:24:32.595264  309140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:24:32.605551  309140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 23:24:32.605574  309140 start.go:495] detecting cgroup driver to use...
	I0919 23:24:32.605604  309140 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:24:32.605650  309140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:24:32.619391  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:24:32.633226  309140 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:24:32.633293  309140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:24:32.647738  309140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:24:32.661157  309140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:24:32.727607  309140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:24:32.794064  309140 docker.go:234] disabling docker service ...
	I0919 23:24:32.794165  309140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:24:32.807868  309140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:24:32.821190  309140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:24:32.887281  309140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:24:32.951652  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:24:32.964191  309140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:24:32.981546  309140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:24:32.981600  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:32.992970  309140 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 23:24:32.993034  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.004011  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.014603  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.025408  309140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:24:33.035799  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.047602  309140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.058615  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.069264  309140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:24:33.078304  309140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:24:33.087611  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:33.153840  309140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:24:33.713412  309140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:24:33.713477  309140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:24:33.717761  309140 start.go:563] Will wait 60s for crictl version
	I0919 23:24:33.717833  309140 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.721427  309140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:24:33.762284  309140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 23:24:33.762388  309140 ssh_runner.go:195] Run: crio --version
	I0919 23:24:33.803575  309140 ssh_runner.go:195] Run: crio --version
	I0919 23:24:33.848872  309140 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 23:24:31.164849  302093 addons.go:514] duration metric: took 604.189926ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:24:31.446770  302093 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-781969" context rescaled to 1 replicas
	W0919 23:24:32.947496  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:31.735630  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:33.735738  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:33.850269  309140 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:24:33.872231  309140 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:24:33.876658  309140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:24:33.890465  309140 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:24:33.890565  309140 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:24:33.890611  309140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:24:33.937949  309140 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:24:33.937970  309140 crio.go:433] Images already preloaded, skipping extraction
	I0919 23:24:33.938010  309140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:24:33.977639  309140 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:24:33.977676  309140 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:24:33.977687  309140 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0919 23:24:33.977802  309140 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:24:33.977887  309140 ssh_runner.go:195] Run: crio config
	I0919 23:24:34.031365  309140 cni.go:84] Creating CNI manager for ""
	I0919 23:24:34.031392  309140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:24:34.031403  309140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:24:34.031428  309140 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523696 NodeName:default-k8s-diff-port-523696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:24:34.031594  309140 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:24:34.031661  309140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:24:34.042187  309140 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:24:34.042257  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:24:34.054318  309140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0919 23:24:34.075609  309140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:24:34.097647  309140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0919 23:24:34.120339  309140 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:24:34.124618  309140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:24:34.139295  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:34.212038  309140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:34.232186  309140 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696 for IP: 192.168.76.2
	I0919 23:24:34.232210  309140 certs.go:194] generating shared ca certs ...
	I0919 23:24:34.232230  309140 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.232372  309140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 23:24:34.232412  309140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 23:24:34.232423  309140 certs.go:256] generating profile certs ...
	I0919 23:24:34.232539  309140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key
	I0919 23:24:34.232622  309140 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e
	I0919 23:24:34.232672  309140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key
	I0919 23:24:34.232810  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 23:24:34.232834  309140 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 23:24:34.232841  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:24:34.232860  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:24:34.232878  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:24:34.232899  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 23:24:34.232950  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:24:34.233712  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:24:34.268742  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:24:34.300594  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:24:34.337147  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 23:24:34.373002  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:24:34.405402  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:24:34.434351  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:24:34.465027  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:24:34.491232  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 23:24:34.520175  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 23:24:34.546793  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:24:34.579206  309140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:24:34.600687  309140 ssh_runner.go:195] Run: openssl version
	I0919 23:24:34.606839  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:24:34.617629  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.621401  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.621464  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.628338  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:24:34.637814  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 23:24:34.648424  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.652983  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.653057  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.660990  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 23:24:34.670822  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 23:24:34.681542  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.685776  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.685838  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.692846  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:24:34.703123  309140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:24:34.707402  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:24:34.714339  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:24:34.721673  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:24:34.728622  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:24:34.735988  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:24:34.744110  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
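(Aside, not part of the log: the lines above distribute CA certificates, compute their OpenSSL subject hashes for the /etc/ssl/certs/<hash>.0 symlinks, and verify that the control-plane certificates are valid for at least another day with `-checkend 86400`. The following is a minimal illustrative sketch of those steps, not minikube's actual implementation; the file paths are copied from the log, everything else is assumed for the example.)

// Sketch: hash a CA cert, create the /etc/ssl/certs/<hash>.0 symlink, and check expiry.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCACert(certPath string) error {
	// `openssl x509 -hash -noout` prints the subject hash used for the symlink name.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Replace any stale link, mirroring the `ln -fs` commands in the log.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func certValidForADay(certPath string) bool {
	// Exit status 0 means the certificate does not expire within the next 86400 seconds.
	return exec.Command("openssl", "x509", "-noout", "-in", certPath, "-checkend", "86400").Run() == nil
}

func main() {
	if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
	fmt.Println(certValidForADay("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
}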
	I0919 23:24:34.754243  309140 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:34.754341  309140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:24:34.754401  309140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:24:34.798434  309140 cri.go:89] found id: ""
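(Aside, not part of the log: the cri.go lines above list kube-system containers by shelling out to crictl with a namespace label filter. A minimal sketch of that call follows; it is only an illustration of the logged command, with output handling assumed for the example.)

// Sketch: list kube-system container IDs via crictl, as the logged command does.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	ids := strings.Fields(string(out))
	fmt.Printf("found %d kube-system containers: %v\n", len(ids), ids)
}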
	I0919 23:24:34.798547  309140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:24:34.810288  309140 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:24:34.810308  309140 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:24:34.810356  309140 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:24:34.820696  309140 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:24:34.821738  309140 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-523696" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:34.822397  309140 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-523696" cluster setting kubeconfig missing "default-k8s-diff-port-523696" context setting]
	I0919 23:24:34.823789  309140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
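(Aside, not part of the log: the "needs updating (will repair)" lines above refer to adding a missing cluster and context entry to the kubeconfig. The following is a minimal sketch of such a repair using the client-go clientcmd API; the kubeconfig path, CA path, and server URL are assumptions for the example, not what minikube writes.)

// Sketch: ensure a kubeconfig has cluster and context entries for a named profile.
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := "/home/jenkins/.kube/config" // hypothetical kubeconfig path
	name := "default-k8s-diff-port-523696"

	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		cfg = api.NewConfig() // start fresh if the file is missing or unreadable
	}
	if _, ok := cfg.Clusters[name]; !ok {
		c := api.NewCluster()
		c.Server = "https://192.168.76.2:8444" // endpoint taken from the profile above
		c.CertificateAuthority = "/home/jenkins/.minikube/ca.crt"
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok {
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name
		ctx.Namespace = "default"
		cfg.Contexts[name] = ctx
	}
	cfg.CurrentContext = name
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		log.Fatal(err)
	}
}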
	I0919 23:24:34.826318  309140 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:24:34.836440  309140 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0919 23:24:34.836479  309140 kubeadm.go:593] duration metric: took 26.164332ms to restartPrimaryControlPlane
	I0919 23:24:34.836489  309140 kubeadm.go:394] duration metric: took 82.255715ms to StartCluster
	I0919 23:24:34.836509  309140 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.836598  309140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:34.838290  309140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.838505  309140 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:24:34.838571  309140 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:24:34.838669  309140 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838700  309140 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523696"
	I0919 23:24:34.838697  309140 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523696"
	W0919 23:24:34.838713  309140 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:24:34.838723  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:34.838737  309140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523696"
	I0919 23:24:34.838742  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.838737  309140 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838747  309140 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838779  309140 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.838791  309140 addons.go:247] addon metrics-server should already be in state true
	I0919 23:24:34.838792  309140 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.838802  309140 addons.go:247] addon dashboard should already be in state true
	I0919 23:24:34.838821  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.838843  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.839154  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839285  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839292  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839314  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.840627  309140 out.go:179] * Verifying Kubernetes components...
	I0919 23:24:34.844063  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:34.867790  309140 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:24:34.869503  309140 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:24:34.869542  309140 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:24:34.872008  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:24:34.872032  309140 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:24:34.872135  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.872323  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:24:34.872338  309140 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:24:34.872384  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.877981  309140 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.878005  309140 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:24:34.878084  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.878558  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.883277  309140 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:24:31.051809  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:31.051845  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:31.093668  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:31.093705  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:31.153944  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:31.153989  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:31.200178  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:31.200206  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:33.798203  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:33.798646  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:33.798703  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:33.798750  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:33.839377  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:33.839415  257816 cri.go:89] found id: ""
	I0919 23:24:33.839426  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:33.839490  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.844396  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:33.844548  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:33.886550  257816 cri.go:89] found id: ""
	I0919 23:24:33.886585  257816 logs.go:282] 0 containers: []
	W0919 23:24:33.886598  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:33.886610  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:33.886674  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:33.927064  257816 cri.go:89] found id: ""
	I0919 23:24:33.927093  257816 logs.go:282] 0 containers: []
	W0919 23:24:33.927121  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:33.927129  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:33.927175  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:33.967182  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:33.967208  257816 cri.go:89] found id: ""
	I0919 23:24:33.967217  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:33.967278  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.971763  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:33.971832  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:34.018065  257816 cri.go:89] found id: ""
	I0919 23:24:34.018096  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.018120  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:34.018127  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:34.018187  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:34.058020  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:34.058046  257816 cri.go:89] found id: ""
	I0919 23:24:34.058056  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:34.058138  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:34.061911  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:34.061974  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:34.101154  257816 cri.go:89] found id: ""
	I0919 23:24:34.101188  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.101198  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:34.101206  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:34.101254  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:34.141152  257816 cri.go:89] found id: ""
	I0919 23:24:34.141178  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.141190  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:34.141200  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:34.141214  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:34.214074  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:34.214127  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:34.257490  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:34.257523  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:34.311837  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:34.311886  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:34.365038  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:34.365078  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:34.478214  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:34.478247  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:34.496233  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:34.496266  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:34.562196  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:34.562224  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:34.562241  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:34.884891  309140 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:34.884919  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:24:34.884982  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.905312  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.906376  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.909340  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.912013  309140 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:34.912034  309140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:24:34.912095  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.933811  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.966480  309140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:35.010612  309140 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523696" to be "Ready" ...
	I0919 23:24:35.037805  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:24:35.037834  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:24:35.044786  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:35.048325  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:24:35.048349  309140 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:24:35.053928  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:35.069995  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:24:35.070021  309140 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:24:35.084544  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:24:35.084571  309140 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:24:35.109235  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:24:35.109262  309140 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:24:35.128547  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:24:35.128576  309140 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:24:35.139026  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:24:35.159339  309140 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:24:35.159386  309140 retry.go:31] will retry after 137.148012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:24:35.159866  309140 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:24:35.159893  309140 retry.go:31] will retry after 373.756504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
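(Aside, not part of the log: the "apply failed, will retry" / "will retry after ..." lines above show the addon manifests being re-applied while the apiserver is still coming up. A minimal sketch of that pattern follows; the backoff values, attempt count, and manifest path are assumptions for the example, not minikube's actual retry policy.)

// Sketch: retry a kubectl apply with simple exponential backoff until it succeeds.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func applyWithRetry(kubeconfig, manifest string, attempts int) error {
	delay := 150 * time.Millisecond
	var lastErr error
	for i := 0; i < attempts; i++ {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "apply", "-f", manifest)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		lastErr = fmt.Errorf("apply %s: %v: %s", manifest, err, out)
		time.Sleep(delay)
		delay *= 2 // back off before the next attempt
	}
	return lastErr
}

func main() {
	err := applyWithRetry("/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml", 5)
	if err != nil {
		fmt.Println(err)
	}
}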
	I0919 23:24:35.160185  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:24:35.160208  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:24:35.188143  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:24:35.188169  309140 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:24:35.212390  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:24:35.212417  309140 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:24:35.233318  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:24:35.233345  309140 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:24:35.254082  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:24:35.254125  309140 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:24:35.275869  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:24:35.275897  309140 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:24:35.295578  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:24:35.296856  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:35.533943  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:36.964123  309140 node_ready.go:49] node "default-k8s-diff-port-523696" is "Ready"
	I0919 23:24:36.964156  309140 node_ready.go:38] duration metric: took 1.953481907s for node "default-k8s-diff-port-523696" to be "Ready" ...
	I0919 23:24:36.964172  309140 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:24:36.964227  309140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:24:37.617243  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.478166995s)
	I0919 23:24:37.617287  309140 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-523696"
	I0919 23:24:37.617386  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.32176172s)
	I0919 23:24:37.619530  309140 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-523696 addons enable metrics-server
	
	I0919 23:24:37.635317  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.338423787s)
	I0919 23:24:37.635416  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.101439605s)
	I0919 23:24:37.635438  309140 api_server.go:72] duration metric: took 2.796905594s to wait for apiserver process to appear ...
	I0919 23:24:37.635452  309140 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:24:37.635471  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:37.640152  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:24:37.640179  309140 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:24:37.643658  309140 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	W0919 23:24:35.448264  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:37.947094  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:36.235993  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:38.734720  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:37.106633  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:37.107189  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:37.107301  257816 kubeadm.go:593] duration metric: took 4m5.072457044s to restartPrimaryControlPlane
	W0919 23:24:37.107369  257816 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 23:24:37.107399  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 23:24:37.830984  257816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:24:37.847638  257816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:24:37.860362  257816 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:24:37.860561  257816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:24:37.875827  257816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:24:37.875851  257816 kubeadm.go:157] found existing configuration files:
	
	I0919 23:24:37.875901  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:24:37.887979  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:24:37.888047  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:24:37.901292  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:24:37.913648  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:24:37.913697  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:24:37.924817  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:24:37.935433  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:24:37.935507  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:24:37.946943  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:24:37.960332  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:24:37.960401  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
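(Aside, not part of the log: before re-running `kubeadm init`, the lines above grep each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and remove files that do not reference it. A minimal sketch of that cleanup follows; the file list and endpoint are copied from the log, the rest is illustrative.)

// Sketch: drop stale kubeconfig files so kubeadm can regenerate them.
package main

import (
	"bytes"
	"fmt"
	"os"
)

func main() {
	endpoint := []byte("https://control-plane.minikube.internal:8443")
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !bytes.Contains(data, endpoint) {
			// Missing or stale: remove it so `kubeadm init` writes a fresh one.
			if rmErr := os.Remove(f); rmErr != nil && !os.IsNotExist(rmErr) {
				fmt.Fprintln(os.Stderr, rmErr)
			}
		}
	}
}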
	I0919 23:24:37.973866  257816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:24:38.042696  257816 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:24:38.113772  257816 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:24:37.645231  309140 addons.go:514] duration metric: took 2.806657655s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0919 23:24:38.136257  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:38.141468  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:24:38.141497  309140 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:24:38.636017  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:38.640415  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0919 23:24:38.641501  309140 api_server.go:141] control plane version: v1.34.0
	I0919 23:24:38.641539  309140 api_server.go:131] duration metric: took 1.006078886s to wait for apiserver health ...
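(Aside, not part of the log: the healthz checks above poll https://192.168.76.2:8444/healthz, logging the 500 responses until the apiserver returns 200 "ok". The following is a minimal sketch of that polling loop; skipping TLS verification is a simplification for the example, where a real check would trust the cluster CA, and the timeout values are assumptions.)

// Sketch: poll the apiserver /healthz endpoint until it reports healthy.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reported "ok"
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}

func main() {
	fmt.Println(waitForHealthz("https://192.168.76.2:8444/healthz", time.Minute))
}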
	I0919 23:24:38.641550  309140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:24:38.645492  309140 system_pods.go:59] 9 kube-system pods found
	I0919 23:24:38.645534  309140 system_pods.go:61] "coredns-66bc5c9577-zjjk2" [403d55a0-6e25-4177-9a59-c6ea5792f38e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:38.645545  309140 system_pods.go:61] "etcd-default-k8s-diff-port-523696" [66d51094-8ff7-4164-9c50-41bac13011c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:38.645558  309140 system_pods.go:61] "kindnet-fkhtz" [8d0ba255-999f-4997-971c-6f4501b5a3c3] Running
	I0919 23:24:38.645570  309140 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-523696" [f312d723-9344-4643-8baf-fe8c06960175] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:38.645579  309140 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-523696" [fc4102ec-4d70-4cd9-9296-9cf081d83722] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:38.645585  309140 system_pods.go:61] "kube-proxy-wfzfz" [f616d499-194a-4158-b1f6-c5850de50d2c] Running
	I0919 23:24:38.645593  309140 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-523696" [f109a232-83f4-49bc-b3b7-0f8a300a5715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:38.645600  309140 system_pods.go:61] "metrics-server-746fcd58dc-7lhll" [a52407fb-edc9-43bb-a659-054943380e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:24:38.645605  309140 system_pods.go:61] "storage-provisioner" [4cc2a373-2f09-4f25-aebf-185a99197c9e] Running
	I0919 23:24:38.645613  309140 system_pods.go:74] duration metric: took 4.056969ms to wait for pod list to return data ...
	I0919 23:24:38.645627  309140 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:24:38.648705  309140 default_sa.go:45] found service account: "default"
	I0919 23:24:38.648727  309140 default_sa.go:55] duration metric: took 3.094985ms for default service account to be created ...
	I0919 23:24:38.648737  309140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:24:38.653875  309140 system_pods.go:86] 9 kube-system pods found
	I0919 23:24:38.653910  309140 system_pods.go:89] "coredns-66bc5c9577-zjjk2" [403d55a0-6e25-4177-9a59-c6ea5792f38e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:38.653920  309140 system_pods.go:89] "etcd-default-k8s-diff-port-523696" [66d51094-8ff7-4164-9c50-41bac13011c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:38.653925  309140 system_pods.go:89] "kindnet-fkhtz" [8d0ba255-999f-4997-971c-6f4501b5a3c3] Running
	I0919 23:24:38.653931  309140 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-523696" [f312d723-9344-4643-8baf-fe8c06960175] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:38.653937  309140 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-523696" [fc4102ec-4d70-4cd9-9296-9cf081d83722] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:38.653942  309140 system_pods.go:89] "kube-proxy-wfzfz" [f616d499-194a-4158-b1f6-c5850de50d2c] Running
	I0919 23:24:38.653946  309140 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-523696" [f109a232-83f4-49bc-b3b7-0f8a300a5715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:38.653951  309140 system_pods.go:89] "metrics-server-746fcd58dc-7lhll" [a52407fb-edc9-43bb-a659-054943380e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:24:38.653955  309140 system_pods.go:89] "storage-provisioner" [4cc2a373-2f09-4f25-aebf-185a99197c9e] Running
	I0919 23:24:38.653961  309140 system_pods.go:126] duration metric: took 5.219896ms to wait for k8s-apps to be running ...
	I0919 23:24:38.653968  309140 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:24:38.654008  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:24:38.667026  309140 system_svc.go:56] duration metric: took 13.050178ms WaitForService to wait for kubelet
	I0919 23:24:38.667055  309140 kubeadm.go:578] duration metric: took 3.828523562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:24:38.667079  309140 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:24:38.669937  309140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:24:38.669966  309140 node_conditions.go:123] node cpu capacity is 8
	I0919 23:24:38.669982  309140 node_conditions.go:105] duration metric: took 2.897169ms to run NodePressure ...
	I0919 23:24:38.669995  309140 start.go:241] waiting for startup goroutines ...
	I0919 23:24:38.670005  309140 start.go:246] waiting for cluster config update ...
	I0919 23:24:38.670023  309140 start.go:255] writing updated cluster config ...
	I0919 23:24:38.670401  309140 ssh_runner.go:195] Run: rm -f paused
	I0919 23:24:38.674248  309140 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:38.678720  309140 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjjk2" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:24:40.684410  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:39.947267  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:41.947347  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:40.736012  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:43.235527  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:42.685295  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:45.185381  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:44.447349  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:46.946250  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:45.236721  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:47.734999  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:49.736720  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:47.684720  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:50.184176  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:48.950026  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:51.447161  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:52.235290  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:54.235773  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:52.185812  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:54.684217  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	I0919 23:24:56.735208  303572 pod_ready.go:94] pod "coredns-66bc5c9577-zwdn4" is "Ready"
	I0919 23:24:56.735234  303572 pod_ready.go:86] duration metric: took 36.005668237s for pod "coredns-66bc5c9577-zwdn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.738054  303572 pod_ready.go:83] waiting for pod "etcd-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.742341  303572 pod_ready.go:94] pod "etcd-embed-certs-756077" is "Ready"
	I0919 23:24:56.742366  303572 pod_ready.go:86] duration metric: took 4.270316ms for pod "etcd-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.744379  303572 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.748089  303572 pod_ready.go:94] pod "kube-apiserver-embed-certs-756077" is "Ready"
	I0919 23:24:56.748134  303572 pod_ready.go:86] duration metric: took 3.733649ms for pod "kube-apiserver-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.749913  303572 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.934219  303572 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756077" is "Ready"
	I0919 23:24:56.934252  303572 pod_ready.go:86] duration metric: took 184.319914ms for pod "kube-controller-manager-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.133339  303572 pod_ready.go:83] waiting for pod "kube-proxy-225f8" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.533969  303572 pod_ready.go:94] pod "kube-proxy-225f8" is "Ready"
	I0919 23:24:57.534005  303572 pod_ready.go:86] duration metric: took 400.632976ms for pod "kube-proxy-225f8" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.733589  303572 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:58.133832  303572 pod_ready.go:94] pod "kube-scheduler-embed-certs-756077" is "Ready"
	I0919 23:24:58.133856  303572 pod_ready.go:86] duration metric: took 400.242784ms for pod "kube-scheduler-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:58.133867  303572 pod_ready.go:40] duration metric: took 37.408436087s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:58.179868  303572 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:24:58.183177  303572 out.go:179] * Done! kubectl is now configured to use "embed-certs-756077" cluster and "default" namespace by default
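(Aside, not part of the log: the pod_ready.go lines above repeatedly check whether named kube-system pods report the Ready condition before declaring the cluster done. A minimal sketch of such a check with client-go follows; the pod name and kubeconfig path are assumptions taken from the log for illustration.)

// Sketch: wait for a kube-system pod to report the Ready condition.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-66bc5c9577-zwdn4", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to be Ready")
}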
	W0919 23:24:53.947140  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:56.447077  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:58.447147  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:56.684835  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:59.184368  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:00.947033  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:02.947241  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:01.684242  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:04.184456  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:05.447051  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:07.447339  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:06.684449  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:09.184186  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:11.184293  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	
	
	==> CRI-O <==
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.720819578Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3cb7365-9523-419e-8fa9-3935e3a1422b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.721551443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ac6c9983-6bba-46b2-92bc-1e3ca3c439aa name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.721767569Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ac6c9983-6bba-46b2-92bc-1e3ca3c439aa name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.722691428Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=10e0ff66-974b-4b33-828e-981a0d1991eb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.722797837Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.736012044Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6e469719d47a6fa21be510ae3bed856c58e4b67d7302b45f8695fee54aa49c31/merged/etc/passwd: no such file or directory"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.736063295Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6e469719d47a6fa21be510ae3bed856c58e4b67d7302b45f8695fee54aa49c31/merged/etc/group: no such file or directory"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.790328152Z" level=info msg="Created container 945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe: kube-system/storage-provisioner/storage-provisioner" id=10e0ff66-974b-4b33-828e-981a0d1991eb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.790983635Z" level=info msg="Starting container: 945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe" id=00e284c4-1cb0-41af-85ee-e3869dda74fa name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.798908220Z" level=info msg="Started container" PID=2160 containerID=945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe description=kube-system/storage-provisioner/storage-provisioner id=00e284c4-1cb0-41af-85ee-e3869dda74fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=95c170e4441a33a98f7980e958245ad2e613c850b6167efde40601c94882a197
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.599170540Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=53f1fa4f-679f-4154-a9a1-1773d7555cae name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.599499158Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=53f1fa4f-679f-4154-a9a1-1773d7555cae name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.600212282Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=e0af031d-cbe7-4f66-9c00-4a56b9ddf807 name=/runtime.v1.ImageService/PullImage
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.643604669Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.599533375Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c94346c1-8ad8-4253-9c34-9959b3e1c8ee name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.599800018Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c94346c1-8ad8-4253-9c34-9959b3e1c8ee name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.600688210Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b37900d-0ac7-476f-a9d0-e0de68d01921 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.600895366Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6b37900d-0ac7-476f-a9d0-e0de68d01921 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.601805224Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=5fac2ab7-5478-4c30-8f60-83ffff473611 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.601918034Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.672079210Z" level=info msg="Created container 689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=5fac2ab7-5478-4c30-8f60-83ffff473611 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.672839914Z" level=info msg="Starting container: 689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46" id=017e16b5-9d54-42c1-af28-93e8e9d7f784 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.681038890Z" level=info msg="Started container" PID=2223 containerID=689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper id=017e16b5-9d54-42c1-af28-93e8e9d7f784 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8ab8fb06936b78a7eaec2eba829e57c5ced1c28b0b82382dd0441939029f5e5
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.750031261Z" level=info msg="Removing container: 5b172bbc722045512a0cefe25f6a1fd3865401c873fbe58f7625735533618847" id=af61a01b-775f-434f-9ba8-b87c58297e28 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.768297610Z" level=info msg="Removed container 5b172bbc722045512a0cefe25f6a1fd3865401c873fbe58f7625735533618847: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=af61a01b-775f-434f-9ba8-b87c58297e28 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD
	689221032a263       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           11 seconds ago      Exited              dashboard-metrics-scraper   3                   a8ab8fb06936b       dashboard-metrics-scraper-6ffb444bf9-vxck9
	945fba0cfc75c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           22 seconds ago      Running             storage-provisioner         2                   95c170e4441a3       storage-provisioner
	e051b85cf0394       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   44 seconds ago      Running             kubernetes-dashboard        0                   aff0cc0db2f0e       kubernetes-dashboard-855c9754f9-x74ct
	f4e22cd5306b9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           53 seconds ago      Running             coredns                     1                   e30a38036cea8       coredns-66bc5c9577-zwdn4
	3fe3e5ea53116       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           53 seconds ago      Running             busybox                     1                   7318abea2bab3       busybox
	70b10a10b2d7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           53 seconds ago      Running             kindnet-cni                 1                   95cdab2360ac1       kindnet-ts4kx
	1efebf99a4067       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           53 seconds ago      Exited              storage-provisioner         1                   95c170e4441a3       storage-provisioner
	de878a7c62a14       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                           53 seconds ago      Running             kube-proxy                  1                   433e1f76cea2c       kube-proxy-225f8
	cd44e5b995012       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                           57 seconds ago      Running             kube-controller-manager     1                   d4584889067e1       kube-controller-manager-embed-certs-756077
	9032dfc131c66       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                           57 seconds ago      Running             kube-apiserver              1                   886ff72149aed       kube-apiserver-embed-certs-756077
	f2e79ffd44d93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        1                   afa1b0c0ea0ce       etcd-embed-certs-756077
	480cfa6126d62       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                           57 seconds ago      Running             kube-scheduler              1                   e53a586b1874d       kube-scheduler-embed-certs-756077
	
	
	==> coredns [f4e22cd5306b904f82c2ca26c269a044b12f70ed14c1edbb5303068498f73b82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50886 - 6299 "HINFO IN 614489967846936490.7588539253947482831. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015410981s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-756077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-756077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=embed-certs-756077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_22_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:22:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-756077
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:23:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-756077
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc8130e1399c4068bb9500ef3a03b5ff
	  System UUID:                31f9854e-9bf9-4035-8a48-847a7779033e
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-zwdn4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m19s
	  kube-system                 etcd-embed-certs-756077                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m25s
	  kube-system                 kindnet-ts4kx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m19s
	  kube-system                 kube-apiserver-embed-certs-756077             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-embed-certs-756077    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-proxy-225f8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kube-system                 kube-scheduler-embed-certs-756077             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 metrics-server-746fcd58dc-8gz2l               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         83s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m19s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vxck9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-x74ct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m18s              kube-proxy       
	  Normal  Starting                 53s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m25s              kubelet          Node embed-certs-756077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m25s              kubelet          Node embed-certs-756077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m25s              kubelet          Node embed-certs-756077 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m25s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m20s              node-controller  Node embed-certs-756077 event: Registered Node embed-certs-756077 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-756077 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-756077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-756077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-756077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           52s                node-controller  Node embed-certs-756077 event: Registered Node embed-certs-756077 in Controller
	  Normal  Starting                 1s                 kubelet          Starting kubelet.
	  Normal  Starting                 0s                 kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 23:21] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +2.000740] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.000000] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999317] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.501476] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499982] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999149] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.001177] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.997827] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.502489] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499017] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999122] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.003267] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.996866] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.503800] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	
	
	==> etcd [f2e79ffd44d93ee3fe15b2873aba29bdfbf793ab2a6b29508fc474c82504d5f2] <==
	{"level":"warn","ts":"2025-09-19T23:24:18.370343Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42070","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.379291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.385738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.393855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.400803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.408599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.415027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.421521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.429046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.436231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.443925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.453339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.462191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.468870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.476012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.484757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.492520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.500250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.507123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.515170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.523067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.535780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.543702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.552386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.605818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 23:25:14 up  2:07,  0 users,  load average: 3.65, 3.21, 2.14
	Linux embed-certs-756077 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [70b10a10b2d7f763143bec483cc6467ce648f9daec7bf3b72a3ffefdcac7b0b4] <==
	I0919 23:24:21.365206       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:24:21.365446       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0919 23:24:21.365609       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:24:21.365628       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:24:21.365652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:24:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:24:21.568361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:24:21.568392       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:24:21.568406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:24:21.568546       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:24:21.869496       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:24:21.869526       1 metrics.go:72] Registering metrics
	I0919 23:24:21.869598       1 controller.go:711] "Syncing nftables rules"
	I0919 23:24:31.568760       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:31.568797       1 main.go:301] handling current node
	I0919 23:24:41.571183       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:41.571244       1 main.go:301] handling current node
	I0919 23:24:51.568195       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:51.568226       1 main.go:301] handling current node
	I0919 23:25:01.568229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:25:01.568267       1 main.go:301] handling current node
	I0919 23:25:11.922425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:25:11.922464       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9032dfc131c662ca6ba1220fa3726c4894f3d6745e84069ce021f410ca21d2f8] <==
	W0919 23:24:20.134403       1 handler_proxy.go:99] no RequestInfo found in the context
	E0919 23:24:20.134453       1 controller.go:113] "Unhandled Error" err="loading OpenAPI spec for \"v1beta1.metrics.k8s.io\" failed with: Error, could not get list of group versions for APIService" logger="UnhandledError"
	I0919 23:24:20.135326       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0919 23:24:22.599127       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I0919 23:24:22.846985       1 controller.go:667] quota admission added evaluator for: endpoints
	I0919 23:24:23.046961       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	{"level":"warn","ts":"2025-09-19T23:25:13.352060Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006bba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:25:13.361340       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 7.770138ms, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:25:13.353527       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-09-19T23:25:13.353154Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0027b21e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	{"level":"warn","ts":"2025-09-19T23:25:13.353305Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc001c6e1e0/127.0.0.1:2379","method":"/etcdserverpb.KV/Range","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:25:13.361718       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 23:25:13.361800       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"context canceled\"}: context canceled" logger="UnhandledError"
	E0919 23:25:13.362556       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.362783       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.362868       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.363827       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.363852       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.363952       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.363990       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.365191       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="12.105681ms" method="GET" path="/api/v1/nodes/embed-certs-756077" result=null
	E0919 23:25:13.365273       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="11.985888ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:25:13.365324       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:13.366659       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="13.529005ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-756077" result=null
	
	
	==> kube-controller-manager [cd44e5b99501296d31ff10da499f8768c2e51e6a3a00597904da146bba3de464] <==
	I0919 23:24:22.443613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:24:22.443627       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:24:22.443660       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:24:22.443687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:24:22.443795       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:24:22.444379       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:24:22.444405       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:24:22.444848       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:24:22.445939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:24:22.446016       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:24:22.446130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:24:22.446187       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:24:22.450450       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:24:22.455804       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:24:22.458976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:24:22.465148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:24:22.465231       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:24:22.465272       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:24:22.465278       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:24:22.465283       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:24:22.472311       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:24:22.476565       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:24:22.478690       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E0919 23:24:52.466183       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:24:52.483648       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [de878a7c62a1413a02b59a1081c2a518f93e4c981f7a27afc1c7fd20e5770499] <==
	I0919 23:24:21.169802       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:24:21.234791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:24:21.334985       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:24:21.335031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:24:21.335170       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:24:21.358336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:24:21.358412       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:24:21.365768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:24:21.366236       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:24:21.366273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:21.367839       1 config.go:200] "Starting service config controller"
	I0919 23:24:21.367861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:24:21.367894       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:24:21.367900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:24:21.367914       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:24:21.367920       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:24:21.368152       1 config.go:309] "Starting node config controller"
	I0919 23:24:21.368216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:24:21.368243       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:24:21.468363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:24:21.468363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:24:21.468426       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [480cfa6126d628ba560d44d5d8b51dd3f1945d2e876917a1761adfb2f06b0e3b] <==
	I0919 23:24:17.657488       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:24:19.063230       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:24:19.063363       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:24:19.063381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:24:19.063392       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:24:19.100053       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:24:19.100092       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:19.103151       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:19.103212       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:19.103771       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:24:19.103947       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:24:19.203828       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.284656    2774 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.286016    2774 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="cri-o" version="1.24.6" apiVersion="v1"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.286735    2774 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.286785    2774 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.286848    2774 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.287844    2774 server.go:1262] "Started kubelet"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.288007    2774 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.288184    2774 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.288309    2774 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.288624    2774 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.295143    2774 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.297903    2774 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.298564    2774 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.298635    2774 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.303301    2774 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.303587    2774 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"embed-certs-756077\" not found"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.307138    2774 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.308574    2774 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.308736    2774 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.309269    2774 factory.go:223] Registration of the crio container factory successfully
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: I0919 23:25:14.310177    2774 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.310378    2774 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:25:14 embed-certs-756077 kubelet[2774]: E0919 23:25:14.310556    2774 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:25:14 embed-certs-756077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:25:14 embed-certs-756077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> kubernetes-dashboard [e051b85cf03946dded3f9bf77644a87b7922c52954281fb05391ea42f51b01e1] <==
	2025/09/19 23:24:29 Using namespace: kubernetes-dashboard
	2025/09/19 23:24:29 Using in-cluster config to connect to apiserver
	2025/09/19 23:24:29 Using secret token for csrf signing
	2025/09/19 23:24:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:24:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:24:29 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:24:29 Generating JWE encryption key
	2025/09/19 23:24:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:24:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:24:30 Initializing JWE encryption key from synchronized object
	2025/09/19 23:24:30 Creating in-cluster Sidecar client
	2025/09/19 23:24:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:24:30 Serving insecurely on HTTP port: 9090
	2025/09/19 23:25:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:24:29 Starting overwatch
	
	
	==> storage-provisioner [1efebf99a406777932ec2bbd2fa1743c46bcec1ce7aa44447249d1ee5211f846] <==
	I0919 23:24:21.152215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:24:51.155011       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe] <==
	I0919 23:24:51.809728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:24:51.817605       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:24:51.817652       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:24:51.820001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:24:55.275809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:24:59.535843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:03.134914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:06.188766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.211305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.217642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:25:09.217840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:25:09.218047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc!
	I0919 23:25:09.218048       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"882b713e-23d7-418e-98f1-b46e56a60afb", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc became leader
	W0919 23:25:09.220199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.224967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:25:09.319055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc!
	W0919 23:25:12.038113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:12.044740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:14.048375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:14.052832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (330.350887ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-756077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-8gz2l
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l: exit status 1 (84.052686ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8gz2l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-756077
helpers_test.go:243: (dbg) docker inspect embed-certs-756077:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7",
	        "Created": "2025-09-19T23:22:33.244479146Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304188,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-09-19T23:24:10.159311961Z",
	            "FinishedAt": "2025-09-19T23:24:09.172092572Z"
	        },
	        "Image": "sha256:c6b5532e987b5b4f5fc9cb0336e378ed49c0542bad8cbfc564b71e977a6269de",
	        "ResolvConfPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/hosts",
	        "LogPath": "/var/lib/docker/containers/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7/d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7-json.log",
	        "Name": "/embed-certs-756077",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-756077:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-756077",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5747027b27d9e51a529e856f16c866e301154b7c4e67db209d2d20be60846e7",
	                "LowerDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b-init/diff:/var/lib/docker/overlay2/8b328baedd0975755f70dca20dfd77e6c050eb383751ad136e72a8db16d2a6ad/diff",
	                "MergedDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/adc2aea0ff6318d47f04da7f67df9bdec1dfc3f6dec5d18c7ad7ffc5d0ec974b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-756077",
	                "Source": "/var/lib/docker/volumes/embed-certs-756077/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-756077",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-756077",
	                "name.minikube.sigs.k8s.io": "embed-certs-756077",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "df6252335d681e947a55f27b707783c4ba4815ae889d236371c4a40f1c6dadb4",
	            "SandboxKey": "/var/run/docker/netns/df6252335d68",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-756077": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0e:67:ef:09:0b:04",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "7e7ff6da14c68ba8ea1175fd14863904b7ad0b6597f22825ec68236b0665d3cb",
	                    "EndpointID": "fe2bee87bd1d31a7d45967d52f408ccc6ebf40d890032e0e3f2dafff1b1b7280",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-756077",
	                        "d5747027b27d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077: exit status 2 (495.118298ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756077 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-756077 logs -n 25: (1.924648801s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p no-preload-042753                                                                                                                                                                                                                          │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ delete  │ -p no-preload-042753                                                                                                                                                                                                                          │ no-preload-042753            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable metrics-server -p newest-cni-734532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ stop    │ -p newest-cni-734532 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable dashboard -p newest-cni-734532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ start   │ -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0 │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ addons  │ enable metrics-server -p embed-certs-756077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ stop    │ -p embed-certs-756077 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:24 UTC │
	│ image   │ newest-cni-734532 image list --format=json                                                                                                                                                                                                    │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ pause   │ -p newest-cni-734532 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:23 UTC │
	│ unpause │ -p newest-cni-734532 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:23 UTC │ 19 Sep 25 23:24 UTC │
	│ delete  │ -p newest-cni-734532                                                                                                                                                                                                                          │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ delete  │ -p newest-cni-734532                                                                                                                                                                                                                          │ newest-cni-734532            │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p auto-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-781969                  │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-523696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ stop    │ -p default-k8s-diff-port-523696 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p embed-certs-756077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                                        │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-523696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:24 UTC │
	│ start   │ -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0                                                                      │ default-k8s-diff-port-523696 │ jenkins │ v1.37.0 │ 19 Sep 25 23:24 UTC │ 19 Sep 25 23:25 UTC │
	│ image   │ embed-certs-756077 image list --format=json                                                                                                                                                                                                   │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ pause   │ -p embed-certs-756077 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ unpause │ -p embed-certs-756077 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-756077           │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	│ ssh     │ -p auto-781969 pgrep -a kubelet                                                                                                                                                                                                               │ auto-781969                  │ jenkins │ v1.37.0 │ 19 Sep 25 23:25 UTC │ 19 Sep 25 23:25 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 23:24:26
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 23:24:26.283657  309140 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:24:26.283815  309140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:24:26.283823  309140 out.go:374] Setting ErrFile to fd 2...
	I0919 23:24:26.283829  309140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:24:26.284132  309140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:24:26.284765  309140 out.go:368] Setting JSON to false
	I0919 23:24:26.286679  309140 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7616,"bootTime":1758316650,"procs":681,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:24:26.286825  309140 start.go:140] virtualization: kvm guest
	I0919 23:24:26.289274  309140 out.go:179] * [default-k8s-diff-port-523696] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:24:26.290737  309140 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:24:26.290736  309140 notify.go:220] Checking for updates...
	I0919 23:24:26.295556  309140 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:24:26.297180  309140 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:26.298865  309140 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 23:24:26.300298  309140 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:24:26.301687  309140 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:24:26.304857  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:26.305618  309140 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:24:26.338118  309140 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:24:26.338297  309140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:24:26.421312  309140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:24:26.407742769 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:24:26.421477  309140 docker.go:318] overlay module found
	I0919 23:24:26.424827  309140 out.go:179] * Using the docker driver based on existing profile
	I0919 23:24:26.426187  309140 start.go:304] selected driver: docker
	I0919 23:24:26.426204  309140 start.go:918] validating driver "docker" against &{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:26.426288  309140 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:24:26.427008  309140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:24:26.505958  309140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:73 OomKillDisable:false NGoroutines:77 SystemTime:2025-09-19 23:24:26.488620346 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:24:26.506591  309140 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:24:26.506637  309140 cni.go:84] Creating CNI manager for ""
	I0919 23:24:26.506721  309140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:24:26.506790  309140 start.go:348] cluster config:
	{Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:26.511950  309140 out.go:179] * Starting "default-k8s-diff-port-523696" primary control-plane node in "default-k8s-diff-port-523696" cluster
	I0919 23:24:26.513789  309140 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 23:24:26.515490  309140 out.go:179] * Pulling base image v0.0.48 ...
	I0919 23:24:26.517264  309140 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:24:26.517329  309140 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
	I0919 23:24:26.517343  309140 cache.go:58] Caching tarball of preloaded images
	I0919 23:24:26.517377  309140 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 23:24:26.517443  309140 preload.go:172] Found /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0919 23:24:26.517458  309140 cache.go:61] Finished verifying existence of preloaded tar for v1.34.0 on crio
	I0919 23:24:26.517589  309140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:24:26.543420  309140 image.go:100] Found gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon, skipping pull
	I0919 23:24:26.543441  309140 cache.go:147] gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 exists in daemon, skipping load
	I0919 23:24:26.543462  309140 cache.go:232] Successfully downloaded all kic artifacts
	I0919 23:24:26.543491  309140 start.go:360] acquireMachinesLock for default-k8s-diff-port-523696: {Name:mk3e8cf47fc7b3222021a2ea03ba5708af5f316a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0919 23:24:26.543572  309140 start.go:364] duration metric: took 48.565µs to acquireMachinesLock for "default-k8s-diff-port-523696"
	I0919 23:24:26.543596  309140 start.go:96] Skipping create...Using existing machine configuration
	I0919 23:24:26.543606  309140 fix.go:54] fixHost starting: 
	I0919 23:24:26.543824  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:26.564538  309140 fix.go:112] recreateIfNeeded on default-k8s-diff-port-523696: state=Stopped err=<nil>
	W0919 23:24:26.564567  309140 fix.go:138] unexpected machine state, will restart: <nil>
	I0919 23:24:26.037631  302093 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0919 23:24:26.042904  302093 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.0/kubectl ...
	I0919 23:24:26.042933  302093 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0919 23:24:26.067470  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0919 23:24:26.343569  302093 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0919 23:24:26.343643  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:26.343679  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-781969 minikube.k8s.io/updated_at=2025_09_19T23_24_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53 minikube.k8s.io/name=auto-781969 minikube.k8s.io/primary=true
	I0919 23:24:26.353776  302093 ops.go:34] apiserver oom_adj: -16
	I0919 23:24:26.468611  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:26.969359  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:27.468754  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:27.969245  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:28.468965  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	W0919 23:24:26.737927  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:29.236370  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:28.969587  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:29.468825  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:29.969314  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:30.469035  302093 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0919 23:24:30.557900  302093 kubeadm.go:1105] duration metric: took 4.214314591s to wait for elevateKubeSystemPrivileges
	I0919 23:24:30.557940  302093 kubeadm.go:394] duration metric: took 15.744021415s to StartCluster
	I0919 23:24:30.557961  302093 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:30.558072  302093 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:30.560227  302093 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:30.560534  302093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0919 23:24:30.560543  302093 start.go:235] Will wait 15m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:24:30.560657  302093 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:24:30.560739  302093 addons.go:69] Setting storage-provisioner=true in profile "auto-781969"
	I0919 23:24:30.560757  302093 addons.go:238] Setting addon storage-provisioner=true in "auto-781969"
	I0919 23:24:30.560783  302093 host.go:66] Checking if "auto-781969" exists ...
	I0919 23:24:30.560798  302093 config.go:182] Loaded profile config "auto-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:30.560935  302093 addons.go:69] Setting default-storageclass=true in profile "auto-781969"
	I0919 23:24:30.560950  302093 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-781969"
	I0919 23:24:30.561276  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.561311  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.564238  302093 out.go:179] * Verifying Kubernetes components...
	I0919 23:24:30.565938  302093 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:30.587926  302093 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:24:26.904176  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:26.904556  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:26.904610  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:26.904659  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:26.955554  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:26.955575  257816 cri.go:89] found id: ""
	I0919 23:24:26.955584  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:26.955643  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:26.960635  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:26.960713  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:27.010249  257816 cri.go:89] found id: ""
	I0919 23:24:27.010280  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.010289  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:27.010297  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:27.010353  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:27.071442  257816 cri.go:89] found id: ""
	I0919 23:24:27.071470  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.071482  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:27.071489  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:27.071558  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:27.132397  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:27.132470  257816 cri.go:89] found id: ""
	I0919 23:24:27.132485  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:27.132538  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:27.137314  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:27.137390  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:27.206185  257816 cri.go:89] found id: ""
	I0919 23:24:27.206216  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.206228  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:27.206235  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:27.206291  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:27.249808  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:27.249832  257816 cri.go:89] found id: ""
	I0919 23:24:27.249841  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:27.249907  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:27.255500  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:27.255568  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:27.297727  257816 cri.go:89] found id: ""
	I0919 23:24:27.297755  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.297763  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:27.297769  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:27.297822  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:27.336933  257816 cri.go:89] found id: ""
	I0919 23:24:27.336966  257816 logs.go:282] 0 containers: []
	W0919 23:24:27.336976  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:27.336987  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:27.336998  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:27.390200  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:27.390234  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:27.434021  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:27.434049  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:27.555056  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:27.555095  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:27.574218  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:27.574248  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:27.646492  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:27.646519  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:27.646536  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:27.702869  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:27.702903  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:27.795346  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:27.795440  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:30.338192  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:30.338777  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:30.338841  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:30.338909  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:30.382908  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:30.382935  257816 cri.go:89] found id: ""
	I0919 23:24:30.382944  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:30.383005  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.388474  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:30.388560  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:30.435791  257816 cri.go:89] found id: ""
	I0919 23:24:30.435817  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.435827  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:30.435834  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:30.435890  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:30.475322  257816 cri.go:89] found id: ""
	I0919 23:24:30.475352  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.475384  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:30.475392  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:30.475457  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:30.528793  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:30.528817  257816 cri.go:89] found id: ""
	I0919 23:24:30.528825  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:30.528876  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.533808  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:30.533888  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:30.582002  257816 cri.go:89] found id: ""
	I0919 23:24:30.582044  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.582055  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:30.582063  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:30.582161  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:30.642550  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:30.642572  257816 cri.go:89] found id: ""
	I0919 23:24:30.642580  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:30.642622  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:30.646953  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:30.647029  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:30.701494  257816 cri.go:89] found id: ""
	I0919 23:24:30.701543  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.701558  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:30.701565  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:30.701649  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:30.764368  257816 cri.go:89] found id: ""
	I0919 23:24:30.764462  257816 logs.go:282] 0 containers: []
	W0919 23:24:30.764486  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:30.764498  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:30.764513  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:30.792998  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:30.793048  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:30.893009  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:30.893031  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:30.893046  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:30.961638  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:30.961678  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:30.588985  302093 addons.go:238] Setting addon default-storageclass=true in "auto-781969"
	I0919 23:24:30.589035  302093 host.go:66] Checking if "auto-781969" exists ...
	I0919 23:24:30.589527  302093 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:30.589544  302093 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:24:30.589595  302093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-781969
	I0919 23:24:30.589784  302093 cli_runner.go:164] Run: docker container inspect auto-781969 --format={{.State.Status}}
	I0919 23:24:30.623768  302093 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:30.623861  302093 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:24:30.624016  302093 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-781969
	I0919 23:24:30.625118  302093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/auto-781969/id_rsa Username:docker}
	I0919 23:24:30.649823  302093 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/auto-781969/id_rsa Username:docker}
	I0919 23:24:30.675726  302093 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0919 23:24:30.717894  302093 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:30.779713  302093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:30.779764  302093 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:30.942091  302093 start.go:976] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I0919 23:24:30.943209  302093 node_ready.go:35] waiting up to 15m0s for node "auto-781969" to be "Ready" ...
	I0919 23:24:31.163669  302093 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I0919 23:24:26.566548  309140 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-523696" ...
	I0919 23:24:26.566615  309140 cli_runner.go:164] Run: docker start default-k8s-diff-port-523696
	I0919 23:24:26.870720  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:26.893342  309140 kic.go:430] container "default-k8s-diff-port-523696" state is running.
	I0919 23:24:26.894016  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:26.924729  309140 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/config.json ...
	I0919 23:24:26.925132  309140 machine.go:93] provisionDockerMachine start ...
	I0919 23:24:26.925209  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:26.948711  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:26.949057  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:26.949077  309140 main.go:141] libmachine: About to run SSH command:
	hostname
	I0919 23:24:26.949781  309140 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52750->127.0.0.1:33109: read: connection reset by peer
	I0919 23:24:30.092067  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:24:30.092120  309140 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-523696"
	I0919 23:24:30.092185  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:30.112640  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:30.112936  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:30.112953  309140 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-523696 && echo "default-k8s-diff-port-523696" | sudo tee /etc/hostname
	I0919 23:24:30.273791  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-523696
	
	I0919 23:24:30.273872  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:30.292713  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:30.292924  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:30.292946  309140 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-523696' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-523696/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-523696' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0919 23:24:30.434051  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0919 23:24:30.434082  309140 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21594-14668/.minikube CaCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21594-14668/.minikube}
	I0919 23:24:30.434121  309140 ubuntu.go:190] setting up certificates
	I0919 23:24:30.434133  309140 provision.go:84] configureAuth start
	I0919 23:24:30.434186  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:30.453901  309140 provision.go:143] copyHostCerts
	I0919 23:24:30.453969  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem, removing ...
	I0919 23:24:30.453987  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem
	I0919 23:24:30.454091  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/cert.pem (1123 bytes)
	I0919 23:24:30.454257  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem, removing ...
	I0919 23:24:30.454272  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem
	I0919 23:24:30.454317  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/key.pem (1675 bytes)
	I0919 23:24:30.454445  309140 exec_runner.go:144] found /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem, removing ...
	I0919 23:24:30.454458  309140 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem
	I0919 23:24:30.454497  309140 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21594-14668/.minikube/ca.pem (1078 bytes)
	I0919 23:24:30.454593  309140 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-523696 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-523696 localhost minikube]
	I0919 23:24:31.411856  309140 provision.go:177] copyRemoteCerts
	I0919 23:24:31.411911  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0919 23:24:31.411952  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.430897  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:31.531843  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0919 23:24:31.558712  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0919 23:24:31.586368  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0919 23:24:31.612772  309140 provision.go:87] duration metric: took 1.178628147s to configureAuth
	I0919 23:24:31.612797  309140 ubuntu.go:206] setting minikube options for container-runtime
	I0919 23:24:31.612973  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:31.613078  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.632012  309140 main.go:141] libmachine: Using SSH client type: native
	I0919 23:24:31.632249  309140 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x840140] 0x842e40 <nil>  [] 0s} 127.0.0.1 33109 <nil> <nil>}
	I0919 23:24:31.632267  309140 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0919 23:24:31.935858  309140 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0919 23:24:31.935887  309140 machine.go:96] duration metric: took 5.010735102s to provisionDockerMachine
	I0919 23:24:31.935899  309140 start.go:293] postStartSetup for "default-k8s-diff-port-523696" (driver="docker")
	I0919 23:24:31.935912  309140 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0919 23:24:31.935968  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0919 23:24:31.936005  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:31.956315  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.056792  309140 ssh_runner.go:195] Run: cat /etc/os-release
	I0919 23:24:32.061192  309140 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0919 23:24:32.061236  309140 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0919 23:24:32.061246  309140 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0919 23:24:32.061253  309140 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0919 23:24:32.061269  309140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/addons for local assets ...
	I0919 23:24:32.061351  309140 filesync.go:126] Scanning /home/jenkins/minikube-integration/21594-14668/.minikube/files for local assets ...
	I0919 23:24:32.061458  309140 filesync.go:149] local asset: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem -> 181752.pem in /etc/ssl/certs
	I0919 23:24:32.061588  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0919 23:24:32.072274  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:24:32.099675  309140 start.go:296] duration metric: took 163.760515ms for postStartSetup
	I0919 23:24:32.099759  309140 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:24:32.099799  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.120088  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.213560  309140 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0919 23:24:32.218295  309140 fix.go:56] duration metric: took 5.67468432s for fixHost
	I0919 23:24:32.218318  309140 start.go:83] releasing machines lock for "default-k8s-diff-port-523696", held for 5.674733278s
	I0919 23:24:32.218384  309140 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-523696
	I0919 23:24:32.238458  309140 ssh_runner.go:195] Run: cat /version.json
	I0919 23:24:32.238503  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.238533  309140 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0919 23:24:32.238607  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:32.259048  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.259318  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:32.430569  309140 ssh_runner.go:195] Run: systemctl --version
	I0919 23:24:32.435992  309140 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0919 23:24:32.579156  309140 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0919 23:24:32.584528  309140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:24:32.595184  309140 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0919 23:24:32.595264  309140 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0919 23:24:32.605551  309140 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0919 23:24:32.605574  309140 start.go:495] detecting cgroup driver to use...
	I0919 23:24:32.605604  309140 detect.go:190] detected "systemd" cgroup driver on host os
	I0919 23:24:32.605650  309140 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0919 23:24:32.619391  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0919 23:24:32.633226  309140 docker.go:218] disabling cri-docker service (if available) ...
	I0919 23:24:32.633293  309140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0919 23:24:32.647738  309140 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0919 23:24:32.661157  309140 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0919 23:24:32.727607  309140 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0919 23:24:32.794064  309140 docker.go:234] disabling docker service ...
	I0919 23:24:32.794165  309140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0919 23:24:32.807868  309140 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0919 23:24:32.821190  309140 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0919 23:24:32.887281  309140 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0919 23:24:32.951652  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0919 23:24:32.964191  309140 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0919 23:24:32.981546  309140 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I0919 23:24:32.981600  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:32.992970  309140 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I0919 23:24:32.993034  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.004011  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.014603  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.025408  309140 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0919 23:24:33.035799  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.047602  309140 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.058615  309140 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0919 23:24:33.069264  309140 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0919 23:24:33.078304  309140 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0919 23:24:33.087611  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:33.153840  309140 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0919 23:24:33.713412  309140 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0919 23:24:33.713477  309140 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0919 23:24:33.717761  309140 start.go:563] Will wait 60s for crictl version
	I0919 23:24:33.717833  309140 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.721427  309140 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0919 23:24:33.762284  309140 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0919 23:24:33.762388  309140 ssh_runner.go:195] Run: crio --version
	I0919 23:24:33.803575  309140 ssh_runner.go:195] Run: crio --version
	I0919 23:24:33.848872  309140 out.go:179] * Preparing Kubernetes v1.34.0 on CRI-O 1.24.6 ...
	I0919 23:24:31.164849  302093 addons.go:514] duration metric: took 604.189926ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0919 23:24:31.446770  302093 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-781969" context rescaled to 1 replicas
	W0919 23:24:32.947496  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:31.735630  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:33.735738  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:33.850269  309140 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-523696 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0919 23:24:33.872231  309140 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0919 23:24:33.876658  309140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:24:33.890465  309140 kubeadm.go:875] updating cluster {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0919 23:24:33.890565  309140 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
	I0919 23:24:33.890611  309140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:24:33.937949  309140 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:24:33.937970  309140 crio.go:433] Images already preloaded, skipping extraction
	I0919 23:24:33.938010  309140 ssh_runner.go:195] Run: sudo crictl images --output json
	I0919 23:24:33.977639  309140 crio.go:514] all images are preloaded for cri-o runtime.
	I0919 23:24:33.977676  309140 cache_images.go:85] Images are preloaded, skipping loading
	I0919 23:24:33.977687  309140 kubeadm.go:926] updating node { 192.168.76.2 8444 v1.34.0 crio true true} ...
	I0919 23:24:33.977802  309140 kubeadm.go:938] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-523696 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0919 23:24:33.977887  309140 ssh_runner.go:195] Run: crio config
	I0919 23:24:34.031365  309140 cni.go:84] Creating CNI manager for ""
	I0919 23:24:34.031392  309140 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 23:24:34.031403  309140 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0919 23:24:34.031428  309140 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-523696 NodeName:default-k8s-diff-port-523696 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0919 23:24:34.031594  309140 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-523696"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0919 23:24:34.031661  309140 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.0
	I0919 23:24:34.042187  309140 binaries.go:44] Found k8s binaries, skipping transfer
	I0919 23:24:34.042257  309140 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0919 23:24:34.054318  309140 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I0919 23:24:34.075609  309140 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0919 23:24:34.097647  309140 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I0919 23:24:34.120339  309140 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0919 23:24:34.124618  309140 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0919 23:24:34.139295  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:34.212038  309140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:34.232186  309140 certs.go:68] Setting up /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696 for IP: 192.168.76.2
	I0919 23:24:34.232210  309140 certs.go:194] generating shared ca certs ...
	I0919 23:24:34.232230  309140 certs.go:226] acquiring lock for ca certs: {Name:mk0815e934ab9114b19d1d06a5ef4b2ac8466d13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.232372  309140 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key
	I0919 23:24:34.232412  309140 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key
	I0919 23:24:34.232423  309140 certs.go:256] generating profile certs ...
	I0919 23:24:34.232539  309140 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/client.key
	I0919 23:24:34.232622  309140 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key.3ddce01e
	I0919 23:24:34.232672  309140 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key
	I0919 23:24:34.232810  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem (1338 bytes)
	W0919 23:24:34.232834  309140 certs.go:480] ignoring /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175_empty.pem, impossibly tiny 0 bytes
	I0919 23:24:34.232841  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca-key.pem (1675 bytes)
	I0919 23:24:34.232860  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/ca.pem (1078 bytes)
	I0919 23:24:34.232878  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/cert.pem (1123 bytes)
	I0919 23:24:34.232899  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/certs/key.pem (1675 bytes)
	I0919 23:24:34.232950  309140 certs.go:484] found cert: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem (1708 bytes)
	I0919 23:24:34.233712  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0919 23:24:34.268742  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0919 23:24:34.300594  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0919 23:24:34.337147  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0919 23:24:34.373002  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0919 23:24:34.405402  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0919 23:24:34.434351  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0919 23:24:34.465027  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/default-k8s-diff-port-523696/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0919 23:24:34.491232  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/certs/18175.pem --> /usr/share/ca-certificates/18175.pem (1338 bytes)
	I0919 23:24:34.520175  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/ssl/certs/181752.pem --> /usr/share/ca-certificates/181752.pem (1708 bytes)
	I0919 23:24:34.546793  309140 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0919 23:24:34.579206  309140 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0919 23:24:34.600687  309140 ssh_runner.go:195] Run: openssl version
	I0919 23:24:34.606839  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0919 23:24:34.617629  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.621401  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 19 22:14 /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.621464  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0919 23:24:34.628338  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0919 23:24:34.637814  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/18175.pem && ln -fs /usr/share/ca-certificates/18175.pem /etc/ssl/certs/18175.pem"
	I0919 23:24:34.648424  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.652983  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Sep 19 22:20 /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.653057  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/18175.pem
	I0919 23:24:34.660990  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/18175.pem /etc/ssl/certs/51391683.0"
	I0919 23:24:34.670822  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/181752.pem && ln -fs /usr/share/ca-certificates/181752.pem /etc/ssl/certs/181752.pem"
	I0919 23:24:34.681542  309140 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.685776  309140 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Sep 19 22:20 /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.685838  309140 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/181752.pem
	I0919 23:24:34.692846  309140 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/181752.pem /etc/ssl/certs/3ec20f2e.0"
	I0919 23:24:34.703123  309140 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0919 23:24:34.707402  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0919 23:24:34.714339  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0919 23:24:34.721673  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0919 23:24:34.728622  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0919 23:24:34.735988  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0919 23:24:34.744110  309140 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0919 23:24:34.754243  309140 kubeadm.go:392] StartCluster: {Name:default-k8s-diff-port-523696 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:default-k8s-diff-port-523696 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 23:24:34.754341  309140 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0919 23:24:34.754401  309140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0919 23:24:34.798434  309140 cri.go:89] found id: ""
	I0919 23:24:34.798547  309140 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0919 23:24:34.810288  309140 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0919 23:24:34.810308  309140 kubeadm.go:589] restartPrimaryControlPlane start ...
	I0919 23:24:34.810356  309140 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0919 23:24:34.820696  309140 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:24:34.821738  309140 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-523696" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:34.822397  309140 kubeconfig.go:62] /home/jenkins/minikube-integration/21594-14668/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-523696" cluster setting kubeconfig missing "default-k8s-diff-port-523696" context setting]
	I0919 23:24:34.823789  309140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.826318  309140 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0919 23:24:34.836440  309140 kubeadm.go:626] The running cluster does not require reconfiguration: 192.168.76.2
	I0919 23:24:34.836479  309140 kubeadm.go:593] duration metric: took 26.164332ms to restartPrimaryControlPlane
	I0919 23:24:34.836489  309140 kubeadm.go:394] duration metric: took 82.255715ms to StartCluster
	I0919 23:24:34.836509  309140 settings.go:142] acquiring lock: {Name:mk1667084dcf39b2bd4c098a807e15a901d70fd5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.836598  309140 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:24:34.838290  309140 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/kubeconfig: {Name:mk34ab5473db765a71bfdc9f46d4118d5a4c7dc8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 23:24:34.838505  309140 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0919 23:24:34.838571  309140 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0919 23:24:34.838669  309140 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838700  309140 addons.go:238] Setting addon storage-provisioner=true in "default-k8s-diff-port-523696"
	I0919 23:24:34.838697  309140 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-523696"
	W0919 23:24:34.838713  309140 addons.go:247] addon storage-provisioner should already be in state true
	I0919 23:24:34.838723  309140 config.go:182] Loaded profile config "default-k8s-diff-port-523696": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:24:34.838737  309140 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-523696"
	I0919 23:24:34.838742  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.838737  309140 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838747  309140 addons.go:69] Setting dashboard=true in profile "default-k8s-diff-port-523696"
	I0919 23:24:34.838779  309140 addons.go:238] Setting addon metrics-server=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.838791  309140 addons.go:247] addon metrics-server should already be in state true
	I0919 23:24:34.838792  309140 addons.go:238] Setting addon dashboard=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.838802  309140 addons.go:247] addon dashboard should already be in state true
	I0919 23:24:34.838821  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.838843  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.839154  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839285  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839292  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.839314  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.840627  309140 out.go:179] * Verifying Kubernetes components...
	I0919 23:24:34.844063  309140 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0919 23:24:34.867790  309140 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I0919 23:24:34.869503  309140 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I0919 23:24:34.869542  309140 out.go:179]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0919 23:24:34.872008  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I0919 23:24:34.872032  309140 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I0919 23:24:34.872135  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.872323  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0919 23:24:34.872338  309140 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0919 23:24:34.872384  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.877981  309140 addons.go:238] Setting addon default-storageclass=true in "default-k8s-diff-port-523696"
	W0919 23:24:34.878005  309140 addons.go:247] addon default-storageclass should already be in state true
	I0919 23:24:34.878084  309140 host.go:66] Checking if "default-k8s-diff-port-523696" exists ...
	I0919 23:24:34.878558  309140 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-523696 --format={{.State.Status}}
	I0919 23:24:34.883277  309140 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0919 23:24:31.051809  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:31.051845  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:31.093668  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:31.093705  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:31.153944  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:31.153989  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:31.200178  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:31.200206  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:33.798203  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:33.798646  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:33.798703  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0919 23:24:33.798750  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0919 23:24:33.839377  257816 cri.go:89] found id: "5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:33.839415  257816 cri.go:89] found id: ""
	I0919 23:24:33.839426  257816 logs.go:282] 1 containers: [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4]
	I0919 23:24:33.839490  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.844396  257816 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0919 23:24:33.844548  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0919 23:24:33.886550  257816 cri.go:89] found id: ""
	I0919 23:24:33.886585  257816 logs.go:282] 0 containers: []
	W0919 23:24:33.886598  257816 logs.go:284] No container was found matching "etcd"
	I0919 23:24:33.886610  257816 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0919 23:24:33.886674  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0919 23:24:33.927064  257816 cri.go:89] found id: ""
	I0919 23:24:33.927093  257816 logs.go:282] 0 containers: []
	W0919 23:24:33.927121  257816 logs.go:284] No container was found matching "coredns"
	I0919 23:24:33.927129  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0919 23:24:33.927175  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0919 23:24:33.967182  257816 cri.go:89] found id: "07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:33.967208  257816 cri.go:89] found id: ""
	I0919 23:24:33.967217  257816 logs.go:282] 1 containers: [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d]
	I0919 23:24:33.967278  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:33.971763  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0919 23:24:33.971832  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0919 23:24:34.018065  257816 cri.go:89] found id: ""
	I0919 23:24:34.018096  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.018120  257816 logs.go:284] No container was found matching "kube-proxy"
	I0919 23:24:34.018127  257816 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0919 23:24:34.018187  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0919 23:24:34.058020  257816 cri.go:89] found id: "7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:34.058046  257816 cri.go:89] found id: ""
	I0919 23:24:34.058056  257816 logs.go:282] 1 containers: [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a]
	I0919 23:24:34.058138  257816 ssh_runner.go:195] Run: which crictl
	I0919 23:24:34.061911  257816 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0919 23:24:34.061974  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0919 23:24:34.101154  257816 cri.go:89] found id: ""
	I0919 23:24:34.101188  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.101198  257816 logs.go:284] No container was found matching "kindnet"
	I0919 23:24:34.101206  257816 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0919 23:24:34.101254  257816 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0919 23:24:34.141152  257816 cri.go:89] found id: ""
	I0919 23:24:34.141178  257816 logs.go:282] 0 containers: []
	W0919 23:24:34.141190  257816 logs.go:284] No container was found matching "storage-provisioner"
	I0919 23:24:34.141200  257816 logs.go:123] Gathering logs for kube-scheduler [07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d] ...
	I0919 23:24:34.141214  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07d069723a38b8257dca09f083962a12f096b034def10b68979fde81d63e181d"
	I0919 23:24:34.214074  257816 logs.go:123] Gathering logs for kube-controller-manager [7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a] ...
	I0919 23:24:34.214127  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7ab0a720655c6b6bee006ae7e79e611a2de25d4160712667e8f299b4a06c936a"
	I0919 23:24:34.257490  257816 logs.go:123] Gathering logs for CRI-O ...
	I0919 23:24:34.257523  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0919 23:24:34.311837  257816 logs.go:123] Gathering logs for container status ...
	I0919 23:24:34.311886  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0919 23:24:34.365038  257816 logs.go:123] Gathering logs for kubelet ...
	I0919 23:24:34.365078  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0919 23:24:34.478214  257816 logs.go:123] Gathering logs for dmesg ...
	I0919 23:24:34.478247  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0919 23:24:34.496233  257816 logs.go:123] Gathering logs for describe nodes ...
	I0919 23:24:34.496266  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0919 23:24:34.562196  257816 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0919 23:24:34.562224  257816 logs.go:123] Gathering logs for kube-apiserver [5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4] ...
	I0919 23:24:34.562241  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5ff20035e95d88c6c97f0cf32133d14fa2bfaaeb7c560c156b0cd961d7167ab4"
	I0919 23:24:34.884891  309140 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:34.884919  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0919 23:24:34.884982  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.905312  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.906376  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.909340  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.912013  309140 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:34.912034  309140 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0919 23:24:34.912095  309140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-523696
	I0919 23:24:34.933811  309140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/default-k8s-diff-port-523696/id_rsa Username:docker}
	I0919 23:24:34.966480  309140 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0919 23:24:35.010612  309140 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-523696" to be "Ready" ...
	I0919 23:24:35.037805  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0919 23:24:35.037834  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0919 23:24:35.044786  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:35.048325  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I0919 23:24:35.048349  309140 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I0919 23:24:35.053928  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:35.069995  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0919 23:24:35.070021  309140 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0919 23:24:35.084544  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I0919 23:24:35.084571  309140 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I0919 23:24:35.109235  309140 addons.go:435] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0919 23:24:35.109262  309140 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0919 23:24:35.128547  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I0919 23:24:35.128576  309140 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I0919 23:24:35.139026  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	W0919 23:24:35.159339  309140 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:24:35.159386  309140 retry.go:31] will retry after 137.148012ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W0919 23:24:35.159866  309140 addons.go:461] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:24:35.159893  309140 retry.go:31] will retry after 373.756504ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8444/openapi/v2?timeout=32s": dial tcp [::1]:8444: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I0919 23:24:35.160185  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I0919 23:24:35.160208  309140 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I0919 23:24:35.188143  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-role.yaml
	I0919 23:24:35.188169  309140 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I0919 23:24:35.212390  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I0919 23:24:35.212417  309140 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I0919 23:24:35.233318  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I0919 23:24:35.233345  309140 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I0919 23:24:35.254082  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I0919 23:24:35.254125  309140 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I0919 23:24:35.275869  309140 addons.go:435] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:24:35.275897  309140 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I0919 23:24:35.295578  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I0919 23:24:35.296856  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0919 23:24:35.533943  309140 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I0919 23:24:36.964123  309140 node_ready.go:49] node "default-k8s-diff-port-523696" is "Ready"
	I0919 23:24:36.964156  309140 node_ready.go:38] duration metric: took 1.953481907s for node "default-k8s-diff-port-523696" to be "Ready" ...
	I0919 23:24:36.964172  309140 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:24:36.964227  309140 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:24:37.617243  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.478166995s)
	I0919 23:24:37.617287  309140 addons.go:479] Verifying addon metrics-server=true in "default-k8s-diff-port-523696"
	I0919 23:24:37.617386  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.32176172s)
	I0919 23:24:37.619530  309140 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-523696 addons enable metrics-server
	
	I0919 23:24:37.635317  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.338423787s)
	I0919 23:24:37.635416  309140 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: (2.101439605s)
	I0919 23:24:37.635438  309140 api_server.go:72] duration metric: took 2.796905594s to wait for apiserver process to appear ...
	I0919 23:24:37.635452  309140 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:24:37.635471  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:37.640152  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:24:37.640179  309140 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:24:37.643658  309140 out.go:179] * Enabled addons: metrics-server, dashboard, storage-provisioner, default-storageclass
	W0919 23:24:35.448264  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:37.947094  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:36.235993  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:38.734720  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	I0919 23:24:37.106633  257816 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I0919 23:24:37.107189  257816 api_server.go:269] stopped: https://192.168.94.2:8443/healthz: Get "https://192.168.94.2:8443/healthz": dial tcp 192.168.94.2:8443: connect: connection refused
	I0919 23:24:37.107301  257816 kubeadm.go:593] duration metric: took 4m5.072457044s to restartPrimaryControlPlane
	W0919 23:24:37.107369  257816 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0919 23:24:37.107399  257816 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0919 23:24:37.830984  257816 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:24:37.847638  257816 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0919 23:24:37.860362  257816 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0919 23:24:37.860561  257816 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0919 23:24:37.875827  257816 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0919 23:24:37.875851  257816 kubeadm.go:157] found existing configuration files:
	
	I0919 23:24:37.875901  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0919 23:24:37.887979  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0919 23:24:37.888047  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0919 23:24:37.901292  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0919 23:24:37.913648  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0919 23:24:37.913697  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0919 23:24:37.924817  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0919 23:24:37.935433  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0919 23:24:37.935507  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0919 23:24:37.946943  257816 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0919 23:24:37.960332  257816 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0919 23:24:37.960401  257816 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0919 23:24:37.973866  257816 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.34.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0919 23:24:38.042696  257816 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1037-gcp\n", err: exit status 1
	I0919 23:24:38.113772  257816 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0919 23:24:37.645231  309140 addons.go:514] duration metric: took 2.806657655s for enable addons: enabled=[metrics-server dashboard storage-provisioner default-storageclass]
	I0919 23:24:38.136257  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:38.141468  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0919 23:24:38.141497  309140 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0919 23:24:38.636017  309140 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I0919 23:24:38.640415  309140 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I0919 23:24:38.641501  309140 api_server.go:141] control plane version: v1.34.0
	I0919 23:24:38.641539  309140 api_server.go:131] duration metric: took 1.006078886s to wait for apiserver health ...
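	(Editor's note: the sequence above shows the apiserver healthz poll pattern: /healthz returns 500 while post-start hooks such as rbac/bootstrap-roles are still pending, then 200 once they complete. The following is a minimal stand-alone sketch of that poll loop, not minikube's own api_server.go. Assumptions: the address https://192.168.76.2:8444/healthz from the log is reachable from the caller, and /healthz is served to anonymous clients; otherwise the TLS config would also need the client certificate from the kubeconfig.)

// healthzpoll.go: poll an apiserver /healthz endpoint until it returns 200 or a deadline passes.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Skip server-cert verification: the apiserver certificate is signed by the
	// cluster CA, not a public one.
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.76.2:8444/healthz" // address and port taken from the log above
	deadline := time.Now().Add(2 * time.Minute)

	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			status := resp.StatusCode
			resp.Body.Close()
			if status == http.StatusOK {
				fmt.Println("apiserver is healthy")
				return
			}
			// A 500 here typically means post-start hooks (e.g. rbac/bootstrap-roles)
			// have not finished yet, matching the output above.
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}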
	I0919 23:24:38.641550  309140 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:24:38.645492  309140 system_pods.go:59] 9 kube-system pods found
	I0919 23:24:38.645534  309140 system_pods.go:61] "coredns-66bc5c9577-zjjk2" [403d55a0-6e25-4177-9a59-c6ea5792f38e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:38.645545  309140 system_pods.go:61] "etcd-default-k8s-diff-port-523696" [66d51094-8ff7-4164-9c50-41bac13011c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:38.645558  309140 system_pods.go:61] "kindnet-fkhtz" [8d0ba255-999f-4997-971c-6f4501b5a3c3] Running
	I0919 23:24:38.645570  309140 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-523696" [f312d723-9344-4643-8baf-fe8c06960175] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:38.645579  309140 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-523696" [fc4102ec-4d70-4cd9-9296-9cf081d83722] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:38.645585  309140 system_pods.go:61] "kube-proxy-wfzfz" [f616d499-194a-4158-b1f6-c5850de50d2c] Running
	I0919 23:24:38.645593  309140 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-523696" [f109a232-83f4-49bc-b3b7-0f8a300a5715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:38.645600  309140 system_pods.go:61] "metrics-server-746fcd58dc-7lhll" [a52407fb-edc9-43bb-a659-054943380e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:24:38.645605  309140 system_pods.go:61] "storage-provisioner" [4cc2a373-2f09-4f25-aebf-185a99197c9e] Running
	I0919 23:24:38.645613  309140 system_pods.go:74] duration metric: took 4.056969ms to wait for pod list to return data ...
	I0919 23:24:38.645627  309140 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:24:38.648705  309140 default_sa.go:45] found service account: "default"
	I0919 23:24:38.648727  309140 default_sa.go:55] duration metric: took 3.094985ms for default service account to be created ...
	I0919 23:24:38.648737  309140 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:24:38.653875  309140 system_pods.go:86] 9 kube-system pods found
	I0919 23:24:38.653910  309140 system_pods.go:89] "coredns-66bc5c9577-zjjk2" [403d55a0-6e25-4177-9a59-c6ea5792f38e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:24:38.653920  309140 system_pods.go:89] "etcd-default-k8s-diff-port-523696" [66d51094-8ff7-4164-9c50-41bac13011c1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0919 23:24:38.653925  309140 system_pods.go:89] "kindnet-fkhtz" [8d0ba255-999f-4997-971c-6f4501b5a3c3] Running
	I0919 23:24:38.653931  309140 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-523696" [f312d723-9344-4643-8baf-fe8c06960175] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0919 23:24:38.653937  309140 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-523696" [fc4102ec-4d70-4cd9-9296-9cf081d83722] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0919 23:24:38.653942  309140 system_pods.go:89] "kube-proxy-wfzfz" [f616d499-194a-4158-b1f6-c5850de50d2c] Running
	I0919 23:24:38.653946  309140 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-523696" [f109a232-83f4-49bc-b3b7-0f8a300a5715] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0919 23:24:38.653951  309140 system_pods.go:89] "metrics-server-746fcd58dc-7lhll" [a52407fb-edc9-43bb-a659-054943380e3c] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0919 23:24:38.653955  309140 system_pods.go:89] "storage-provisioner" [4cc2a373-2f09-4f25-aebf-185a99197c9e] Running
	I0919 23:24:38.653961  309140 system_pods.go:126] duration metric: took 5.219896ms to wait for k8s-apps to be running ...
	I0919 23:24:38.653968  309140 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:24:38.654008  309140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:24:38.667026  309140 system_svc.go:56] duration metric: took 13.050178ms WaitForService to wait for kubelet
	I0919 23:24:38.667055  309140 kubeadm.go:578] duration metric: took 3.828523562s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:24:38.667079  309140 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:24:38.669937  309140 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:24:38.669966  309140 node_conditions.go:123] node cpu capacity is 8
	I0919 23:24:38.669982  309140 node_conditions.go:105] duration metric: took 2.897169ms to run NodePressure ...
	I0919 23:24:38.669995  309140 start.go:241] waiting for startup goroutines ...
	I0919 23:24:38.670005  309140 start.go:246] waiting for cluster config update ...
	I0919 23:24:38.670023  309140 start.go:255] writing updated cluster config ...
	I0919 23:24:38.670401  309140 ssh_runner.go:195] Run: rm -f paused
	I0919 23:24:38.674248  309140 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:38.678720  309140 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zjjk2" in "kube-system" namespace to be "Ready" or be gone ...
	W0919 23:24:40.684410  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:39.947267  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:41.947347  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:40.736012  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:43.235527  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:42.685295  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:45.185381  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:44.447349  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:46.946250  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:45.236721  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:47.734999  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:49.736720  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:47.684720  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:50.184176  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:48.950026  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:51.447161  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:52.235290  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:54.235773  303572 pod_ready.go:104] pod "coredns-66bc5c9577-zwdn4" is not "Ready", error: <nil>
	W0919 23:24:52.185812  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:54.684217  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	I0919 23:24:56.735208  303572 pod_ready.go:94] pod "coredns-66bc5c9577-zwdn4" is "Ready"
	I0919 23:24:56.735234  303572 pod_ready.go:86] duration metric: took 36.005668237s for pod "coredns-66bc5c9577-zwdn4" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.738054  303572 pod_ready.go:83] waiting for pod "etcd-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.742341  303572 pod_ready.go:94] pod "etcd-embed-certs-756077" is "Ready"
	I0919 23:24:56.742366  303572 pod_ready.go:86] duration metric: took 4.270316ms for pod "etcd-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.744379  303572 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.748089  303572 pod_ready.go:94] pod "kube-apiserver-embed-certs-756077" is "Ready"
	I0919 23:24:56.748134  303572 pod_ready.go:86] duration metric: took 3.733649ms for pod "kube-apiserver-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.749913  303572 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:56.934219  303572 pod_ready.go:94] pod "kube-controller-manager-embed-certs-756077" is "Ready"
	I0919 23:24:56.934252  303572 pod_ready.go:86] duration metric: took 184.319914ms for pod "kube-controller-manager-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.133339  303572 pod_ready.go:83] waiting for pod "kube-proxy-225f8" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.533969  303572 pod_ready.go:94] pod "kube-proxy-225f8" is "Ready"
	I0919 23:24:57.534005  303572 pod_ready.go:86] duration metric: took 400.632976ms for pod "kube-proxy-225f8" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:57.733589  303572 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:58.133832  303572 pod_ready.go:94] pod "kube-scheduler-embed-certs-756077" is "Ready"
	I0919 23:24:58.133856  303572 pod_ready.go:86] duration metric: took 400.242784ms for pod "kube-scheduler-embed-certs-756077" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:24:58.133867  303572 pod_ready.go:40] duration metric: took 37.408436087s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:24:58.179868  303572 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:24:58.183177  303572 out.go:179] * Done! kubectl is now configured to use "embed-certs-756077" cluster and "default" namespace by default
	W0919 23:24:53.947140  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:56.447077  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:58.447147  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:24:56.684835  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:24:59.184368  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:00.947033  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:02.947241  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:01.684242  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:04.184456  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:05.447051  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:07.447339  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	W0919 23:25:06.684449  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:09.184186  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:11.184293  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	W0919 23:25:09.947008  302093 node_ready.go:57] node "auto-781969" has "Ready":"False" status (will retry)
	I0919 23:25:11.946861  302093 node_ready.go:49] node "auto-781969" is "Ready"
	I0919 23:25:11.946895  302093 node_ready.go:38] duration metric: took 41.003652318s for node "auto-781969" to be "Ready" ...
	I0919 23:25:11.946911  302093 api_server.go:52] waiting for apiserver process to appear ...
	I0919 23:25:11.946965  302093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:25:11.961382  302093 api_server.go:72] duration metric: took 41.40080899s to wait for apiserver process to appear ...
	I0919 23:25:11.961409  302093 api_server.go:88] waiting for apiserver healthz status ...
	I0919 23:25:11.961428  302093 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I0919 23:25:11.968237  302093 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I0919 23:25:11.969460  302093 api_server.go:141] control plane version: v1.34.0
	I0919 23:25:11.969485  302093 api_server.go:131] duration metric: took 8.06849ms to wait for apiserver health ...
	I0919 23:25:11.969495  302093 system_pods.go:43] waiting for kube-system pods to appear ...
	I0919 23:25:11.972947  302093 system_pods.go:59] 8 kube-system pods found
	I0919 23:25:11.972982  302093 system_pods.go:61] "coredns-66bc5c9577-fjpkt" [7e1974ee-e376-4833-9048-156895d0db1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:11.972988  302093 system_pods.go:61] "etcd-auto-781969" [a9bbb03e-97ea-4c5a-b2e4-99e339497eeb] Running
	I0919 23:25:11.972993  302093 system_pods.go:61] "kindnet-7tkl5" [a201e6e4-881c-412b-a80d-a160f8ff2864] Running
	I0919 23:25:11.972999  302093 system_pods.go:61] "kube-apiserver-auto-781969" [da5fb4b3-db5f-4787-9029-da5debbd1ccf] Running
	I0919 23:25:11.973002  302093 system_pods.go:61] "kube-controller-manager-auto-781969" [67042ac0-74bf-4d5f-93bf-a1572e239ce5] Running
	I0919 23:25:11.973006  302093 system_pods.go:61] "kube-proxy-sjffg" [b32b4460-af44-4d28-8bcd-c1085fdf15a1] Running
	I0919 23:25:11.973010  302093 system_pods.go:61] "kube-scheduler-auto-781969" [a777c324-8f7d-4bdc-a414-5948fa742cf0] Running
	I0919 23:25:11.973016  302093 system_pods.go:61] "storage-provisioner" [7e64e54c-8959-44fc-ad7e-836b761cfe0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:25:11.973026  302093 system_pods.go:74] duration metric: took 3.523806ms to wait for pod list to return data ...
	I0919 23:25:11.973039  302093 default_sa.go:34] waiting for default service account to be created ...
	I0919 23:25:11.976034  302093 default_sa.go:45] found service account: "default"
	I0919 23:25:11.976060  302093 default_sa.go:55] duration metric: took 3.013942ms for default service account to be created ...
	I0919 23:25:11.976073  302093 system_pods.go:116] waiting for k8s-apps to be running ...
	I0919 23:25:11.980001  302093 system_pods.go:86] 8 kube-system pods found
	I0919 23:25:11.980040  302093 system_pods.go:89] "coredns-66bc5c9577-fjpkt" [7e1974ee-e376-4833-9048-156895d0db1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:11.980050  302093 system_pods.go:89] "etcd-auto-781969" [a9bbb03e-97ea-4c5a-b2e4-99e339497eeb] Running
	I0919 23:25:11.980058  302093 system_pods.go:89] "kindnet-7tkl5" [a201e6e4-881c-412b-a80d-a160f8ff2864] Running
	I0919 23:25:11.980063  302093 system_pods.go:89] "kube-apiserver-auto-781969" [da5fb4b3-db5f-4787-9029-da5debbd1ccf] Running
	I0919 23:25:11.980068  302093 system_pods.go:89] "kube-controller-manager-auto-781969" [67042ac0-74bf-4d5f-93bf-a1572e239ce5] Running
	I0919 23:25:11.980073  302093 system_pods.go:89] "kube-proxy-sjffg" [b32b4460-af44-4d28-8bcd-c1085fdf15a1] Running
	I0919 23:25:11.980080  302093 system_pods.go:89] "kube-scheduler-auto-781969" [a777c324-8f7d-4bdc-a414-5948fa742cf0] Running
	I0919 23:25:11.980092  302093 system_pods.go:89] "storage-provisioner" [7e64e54c-8959-44fc-ad7e-836b761cfe0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:25:11.980184  302093 retry.go:31] will retry after 280.304463ms: missing components: kube-dns
	I0919 23:25:12.267546  302093 system_pods.go:86] 8 kube-system pods found
	I0919 23:25:12.267591  302093 system_pods.go:89] "coredns-66bc5c9577-fjpkt" [7e1974ee-e376-4833-9048-156895d0db1d] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0919 23:25:12.267602  302093 system_pods.go:89] "etcd-auto-781969" [a9bbb03e-97ea-4c5a-b2e4-99e339497eeb] Running
	I0919 23:25:12.267612  302093 system_pods.go:89] "kindnet-7tkl5" [a201e6e4-881c-412b-a80d-a160f8ff2864] Running
	I0919 23:25:12.267622  302093 system_pods.go:89] "kube-apiserver-auto-781969" [da5fb4b3-db5f-4787-9029-da5debbd1ccf] Running
	I0919 23:25:12.267628  302093 system_pods.go:89] "kube-controller-manager-auto-781969" [67042ac0-74bf-4d5f-93bf-a1572e239ce5] Running
	I0919 23:25:12.267634  302093 system_pods.go:89] "kube-proxy-sjffg" [b32b4460-af44-4d28-8bcd-c1085fdf15a1] Running
	I0919 23:25:12.267639  302093 system_pods.go:89] "kube-scheduler-auto-781969" [a777c324-8f7d-4bdc-a414-5948fa742cf0] Running
	I0919 23:25:12.267646  302093 system_pods.go:89] "storage-provisioner" [7e64e54c-8959-44fc-ad7e-836b761cfe0e] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0919 23:25:12.267667  302093 retry.go:31] will retry after 352.045374ms: missing components: kube-dns
	I0919 23:25:12.625920  302093 system_pods.go:86] 8 kube-system pods found
	I0919 23:25:12.625959  302093 system_pods.go:89] "coredns-66bc5c9577-fjpkt" [7e1974ee-e376-4833-9048-156895d0db1d] Running
	I0919 23:25:12.625967  302093 system_pods.go:89] "etcd-auto-781969" [a9bbb03e-97ea-4c5a-b2e4-99e339497eeb] Running
	I0919 23:25:12.625972  302093 system_pods.go:89] "kindnet-7tkl5" [a201e6e4-881c-412b-a80d-a160f8ff2864] Running
	I0919 23:25:12.625982  302093 system_pods.go:89] "kube-apiserver-auto-781969" [da5fb4b3-db5f-4787-9029-da5debbd1ccf] Running
	I0919 23:25:12.625988  302093 system_pods.go:89] "kube-controller-manager-auto-781969" [67042ac0-74bf-4d5f-93bf-a1572e239ce5] Running
	I0919 23:25:12.625994  302093 system_pods.go:89] "kube-proxy-sjffg" [b32b4460-af44-4d28-8bcd-c1085fdf15a1] Running
	I0919 23:25:12.625999  302093 system_pods.go:89] "kube-scheduler-auto-781969" [a777c324-8f7d-4bdc-a414-5948fa742cf0] Running
	I0919 23:25:12.626004  302093 system_pods.go:89] "storage-provisioner" [7e64e54c-8959-44fc-ad7e-836b761cfe0e] Running
	I0919 23:25:12.626013  302093 system_pods.go:126] duration metric: took 649.934166ms to wait for k8s-apps to be running ...
	I0919 23:25:12.626023  302093 system_svc.go:44] waiting for kubelet service to be running ....
	I0919 23:25:12.626076  302093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:25:12.644516  302093 system_svc.go:56] duration metric: took 18.481348ms WaitForService to wait for kubelet
	I0919 23:25:12.644547  302093 kubeadm.go:578] duration metric: took 42.08397673s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0919 23:25:12.644652  302093 node_conditions.go:102] verifying NodePressure condition ...
	I0919 23:25:12.649381  302093 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0919 23:25:12.649418  302093 node_conditions.go:123] node cpu capacity is 8
	I0919 23:25:12.649433  302093 node_conditions.go:105] duration metric: took 4.774533ms to run NodePressure ...
	I0919 23:25:12.649449  302093 start.go:241] waiting for startup goroutines ...
	I0919 23:25:12.649460  302093 start.go:246] waiting for cluster config update ...
	I0919 23:25:12.649474  302093 start.go:255] writing updated cluster config ...
	I0919 23:25:12.649824  302093 ssh_runner.go:195] Run: rm -f paused
	I0919 23:25:12.655470  302093 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:12.660757  302093 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-fjpkt" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.669723  302093 pod_ready.go:94] pod "coredns-66bc5c9577-fjpkt" is "Ready"
	I0919 23:25:12.669759  302093 pod_ready.go:86] duration metric: took 8.973917ms for pod "coredns-66bc5c9577-fjpkt" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.673670  302093 pod_ready.go:83] waiting for pod "etcd-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.681339  302093 pod_ready.go:94] pod "etcd-auto-781969" is "Ready"
	I0919 23:25:12.681367  302093 pod_ready.go:86] duration metric: took 7.672666ms for pod "etcd-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.684630  302093 pod_ready.go:83] waiting for pod "kube-apiserver-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.690803  302093 pod_ready.go:94] pod "kube-apiserver-auto-781969" is "Ready"
	I0919 23:25:12.690832  302093 pod_ready.go:86] duration metric: took 6.098448ms for pod "kube-apiserver-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:12.693974  302093 pod_ready.go:83] waiting for pod "kube-controller-manager-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:13.064344  302093 pod_ready.go:94] pod "kube-controller-manager-auto-781969" is "Ready"
	I0919 23:25:13.064375  302093 pod_ready.go:86] duration metric: took 370.374465ms for pod "kube-controller-manager-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:13.264939  302093 pod_ready.go:83] waiting for pod "kube-proxy-sjffg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:13.660631  302093 pod_ready.go:94] pod "kube-proxy-sjffg" is "Ready"
	I0919 23:25:13.660661  302093 pod_ready.go:86] duration metric: took 395.687476ms for pod "kube-proxy-sjffg" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:13.860228  302093 pod_ready.go:83] waiting for pod "kube-scheduler-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.262060  302093 pod_ready.go:94] pod "kube-scheduler-auto-781969" is "Ready"
	I0919 23:25:14.262093  302093 pod_ready.go:86] duration metric: took 401.840378ms for pod "kube-scheduler-auto-781969" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.262117  302093 pod_ready.go:40] duration metric: took 1.606609998s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:14.324742  302093 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:25:14.326651  302093 out.go:179] * Done! kubectl is now configured to use "auto-781969" cluster and "default" namespace by default
	W0919 23:25:13.686372  309140 pod_ready.go:104] pod "coredns-66bc5c9577-zjjk2" is not "Ready", error: <nil>
	I0919 23:25:14.684531  309140 pod_ready.go:94] pod "coredns-66bc5c9577-zjjk2" is "Ready"
	I0919 23:25:14.684563  309140 pod_ready.go:86] duration metric: took 36.005815716s for pod "coredns-66bc5c9577-zjjk2" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.687402  309140 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.693400  309140 pod_ready.go:94] pod "etcd-default-k8s-diff-port-523696" is "Ready"
	I0919 23:25:14.693429  309140 pod_ready.go:86] duration metric: took 6.002339ms for pod "etcd-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.696333  309140 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.702765  309140 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-523696" is "Ready"
	I0919 23:25:14.702800  309140 pod_ready.go:86] duration metric: took 6.441029ms for pod "kube-apiserver-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.706850  309140 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:14.883150  309140 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-523696" is "Ready"
	I0919 23:25:14.883181  309140 pod_ready.go:86] duration metric: took 176.299821ms for pod "kube-controller-manager-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:15.082407  309140 pod_ready.go:83] waiting for pod "kube-proxy-wfzfz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:15.482518  309140 pod_ready.go:94] pod "kube-proxy-wfzfz" is "Ready"
	I0919 23:25:15.482540  309140 pod_ready.go:86] duration metric: took 400.106694ms for pod "kube-proxy-wfzfz" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:15.683025  309140 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:16.087400  309140 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-523696" is "Ready"
	I0919 23:25:16.087445  309140 pod_ready.go:86] duration metric: took 404.391028ms for pod "kube-scheduler-default-k8s-diff-port-523696" in "kube-system" namespace to be "Ready" or be gone ...
	I0919 23:25:16.087462  309140 pod_ready.go:40] duration metric: took 37.413170737s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I0919 23:25:16.160919  309140 start.go:617] kubectl: 1.34.1, cluster: 1.34.0 (minor skew: 0)
	I0919 23:25:16.193812  309140 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-523696" cluster and "default" namespace by default
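	(Editor's note: the pod_ready entries above wait up to 4m0s for kube-system pods selected by label to report Ready. The sketch below reproduces that kind of wait with client-go; it is an illustration under stated assumptions, not minikube's pod_ready.go. The kubeconfig path and the 2s poll interval are hypothetical; the k8s-app=kube-dns selector and 4-minute budget are taken from the log.)

// podready.go: wait until all kube-system pods matching a label selector report Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; substitute the cluster's own file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s for up to 4 minutes, mirroring the 4m0s budget in the log.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
			if err != nil {
				return false, nil // treat API errors as transient and keep retrying
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				ready := false
				for _, c := range p.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						ready = true
					}
				}
				if !ready {
					return false, nil
				}
			}
			return true, nil
		})
	if err != nil {
		fmt.Println("pods did not become Ready:", err)
		return
	}
	fmt.Println("all matching pods are Ready")
}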
	
	
	==> CRI-O <==
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.720819578Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3cb7365-9523-419e-8fa9-3935e3a1422b name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.721551443Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=ac6c9983-6bba-46b2-92bc-1e3ca3c439aa name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.721767569Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,RepoTags:[gcr.io/k8s-minikube/storage-provisioner:v5],RepoDigests:[gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944 gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f],Size_:31470524,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=ac6c9983-6bba-46b2-92bc-1e3ca3c439aa name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.722691428Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=10e0ff66-974b-4b33-828e-981a0d1991eb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.722797837Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.736012044Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6e469719d47a6fa21be510ae3bed856c58e4b67d7302b45f8695fee54aa49c31/merged/etc/passwd: no such file or directory"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.736063295Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6e469719d47a6fa21be510ae3bed856c58e4b67d7302b45f8695fee54aa49c31/merged/etc/group: no such file or directory"
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.790328152Z" level=info msg="Created container 945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe: kube-system/storage-provisioner/storage-provisioner" id=10e0ff66-974b-4b33-828e-981a0d1991eb name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.790983635Z" level=info msg="Starting container: 945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe" id=00e284c4-1cb0-41af-85ee-e3869dda74fa name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:24:51 embed-certs-756077 crio[561]: time="2025-09-19 23:24:51.798908220Z" level=info msg="Started container" PID=2160 containerID=945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe description=kube-system/storage-provisioner/storage-provisioner id=00e284c4-1cb0-41af-85ee-e3869dda74fa name=/runtime.v1.RuntimeService/StartContainer sandboxID=95c170e4441a33a98f7980e958245ad2e613c850b6167efde40601c94882a197
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.599170540Z" level=info msg="Checking image status: fake.domain/registry.k8s.io/echoserver:1.4" id=53f1fa4f-679f-4154-a9a1-1773d7555cae name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.599499158Z" level=info msg="Image fake.domain/registry.k8s.io/echoserver:1.4 not found" id=53f1fa4f-679f-4154-a9a1-1773d7555cae name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.600212282Z" level=info msg="Pulling image: fake.domain/registry.k8s.io/echoserver:1.4" id=e0af031d-cbe7-4f66-9c00-4a56b9ddf807 name=/runtime.v1.ImageService/PullImage
	Sep 19 23:24:59 embed-certs-756077 crio[561]: time="2025-09-19 23:24:59.643604669Z" level=info msg="Trying to access \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.599533375Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=c94346c1-8ad8-4253-9c34-9959b3e1c8ee name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.599800018Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=c94346c1-8ad8-4253-9c34-9959b3e1c8ee name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.600688210Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=6b37900d-0ac7-476f-a9d0-e0de68d01921 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.600895366Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7,RepoTags:[registry.k8s.io/echoserver:1.4],RepoDigests:[registry.k8s.io/echoserver@sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb],Size_:145080634,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=6b37900d-0ac7-476f-a9d0-e0de68d01921 name=/runtime.v1.ImageService/ImageStatus
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.601805224Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=5fac2ab7-5478-4c30-8f60-83ffff473611 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.601918034Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.672079210Z" level=info msg="Created container 689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=5fac2ab7-5478-4c30-8f60-83ffff473611 name=/runtime.v1.RuntimeService/CreateContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.672839914Z" level=info msg="Starting container: 689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46" id=017e16b5-9d54-42c1-af28-93e8e9d7f784 name=/runtime.v1.RuntimeService/StartContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.681038890Z" level=info msg="Started container" PID=2223 containerID=689221032a2630c7c58af4630342739f139f729536274c5f953ebd04d737ca46 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper id=017e16b5-9d54-42c1-af28-93e8e9d7f784 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a8ab8fb06936b78a7eaec2eba829e57c5ced1c28b0b82382dd0441939029f5e5
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.750031261Z" level=info msg="Removing container: 5b172bbc722045512a0cefe25f6a1fd3865401c873fbe58f7625735533618847" id=af61a01b-775f-434f-9ba8-b87c58297e28 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 19 23:25:02 embed-certs-756077 crio[561]: time="2025-09-19 23:25:02.768297610Z" level=info msg="Removed container 5b172bbc722045512a0cefe25f6a1fd3865401c873fbe58f7625735533618847: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vxck9/dashboard-metrics-scraper" id=af61a01b-775f-434f-9ba8-b87c58297e28 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	689221032a263       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           14 seconds ago       Exited              dashboard-metrics-scraper   3                   a8ab8fb06936b       dashboard-metrics-scraper-6ffb444bf9-vxck9
	945fba0cfc75c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago       Running             storage-provisioner         2                   95c170e4441a3       storage-provisioner
	e051b85cf0394       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93   47 seconds ago       Running             kubernetes-dashboard        0                   aff0cc0db2f0e       kubernetes-dashboard-855c9754f9-x74ct
	f4e22cd5306b9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago       Running             coredns                     1                   e30a38036cea8       coredns-66bc5c9577-zwdn4
	3fe3e5ea53116       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago       Running             busybox                     1                   7318abea2bab3       busybox
	70b10a10b2d7f       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago       Running             kindnet-cni                 1                   95cdab2360ac1       kindnet-ts4kx
	1efebf99a4067       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago       Exited              storage-provisioner         1                   95c170e4441a3       storage-provisioner
	de878a7c62a14       df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce                                           56 seconds ago       Running             kube-proxy                  1                   433e1f76cea2c       kube-proxy-225f8
	cd44e5b995012       a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634                                           About a minute ago   Running             kube-controller-manager     1                   d4584889067e1       kube-controller-manager-embed-certs-756077
	9032dfc131c66       90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90                                           About a minute ago   Running             kube-apiserver              1                   886ff72149aed       kube-apiserver-embed-certs-756077
	f2e79ffd44d93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        1                   afa1b0c0ea0ce       etcd-embed-certs-756077
	480cfa6126d62       46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc                                           About a minute ago   Running             kube-scheduler              1                   e53a586b1874d       kube-scheduler-embed-certs-756077
	
	
	==> coredns [f4e22cd5306b904f82c2ca26c269a044b12f70ed14c1edbb5303068498f73b82] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50886 - 6299 "HINFO IN 614489967846936490.7588539253947482831. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.015410981s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-756077
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-756077
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=6e37ee63f758843bb5fe33c3a528c564c4b83d53
	                    minikube.k8s.io/name=embed-certs-756077
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_09_19T23_22_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Sep 2025 23:22:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-756077
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Sep 2025 23:25:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:22:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Sep 2025 23:24:50 +0000   Fri, 19 Sep 2025 23:23:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    embed-certs-756077
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863460Ki
	  pods:               110
	System Info:
	  Machine ID:                 bc8130e1399c4068bb9500ef3a03b5ff
	  System UUID:                31f9854e-9bf9-4035-8a48-847a7779033e
	  Boot ID:                    817dec6f-7f4f-4d7c-9b86-04ef5a5257ac
	  Kernel Version:             6.8.0-1037-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.34.0
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-66bc5c9577-zwdn4                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m22s
	  kube-system                 etcd-embed-certs-756077                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m28s
	  kube-system                 kindnet-ts4kx                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m22s
	  kube-system                 kube-apiserver-embed-certs-756077             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-controller-manager-embed-certs-756077    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 kube-proxy-225f8                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kube-system                 kube-scheduler-embed-certs-756077             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m28s
	  kube-system                 metrics-server-746fcd58dc-8gz2l               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         86s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vxck9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-x74ct         0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m20s              kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasNoDiskPressure    2m28s              kubelet          Node embed-certs-756077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m28s              kubelet          Node embed-certs-756077 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m28s              kubelet          Node embed-certs-756077 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m28s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m23s              node-controller  Node embed-certs-756077 event: Registered Node embed-certs-756077 in Controller
	  Normal  NodeReady                101s               kubelet          Node embed-certs-756077 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node embed-certs-756077 status is now: NodeHasSufficientMemory
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node embed-certs-756077 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node embed-certs-756077 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node embed-certs-756077 event: Registered Node embed-certs-756077 in Controller
	  Normal  Starting                 4s                 kubelet          Starting kubelet.
	  Normal  Starting                 3s                 kubelet          Starting kubelet.
	  Normal  Starting                 2s                 kubelet          Starting kubelet.
	
	
	==> dmesg <==
	[  +2.048805] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +4.030683] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[  +8.319260] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000037] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 22:18] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[ +32.253162] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: fa c1 c1 fb 77 b0 aa f2 cb 15 5a 82 08 00
	[Sep19 23:21] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +2.000740] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.000000] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999317] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.501476] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499982] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999149] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.001177] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.997827] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.502489] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.499017] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.999122] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.003267] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.996866] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	[  +0.503800] IPv4: martian destination 127.0.0.11 from 10.244.0.3, dev veth09f172fb
	
	
	==> etcd [f2e79ffd44d93ee3fe15b2873aba29bdfbf793ab2a6b29508fc474c82504d5f2] <==
	{"level":"warn","ts":"2025-09-19T23:24:18.379291Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.385738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.393855Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.400803Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42138","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.408599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42154","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.415027Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.421521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42184","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.429046Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42206","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.436231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.443925Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42232","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.453339Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42256","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.462191Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42268","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.468870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.476012Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42316","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.484757Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42330","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.492520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42352","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.500250Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.507123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42406","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.515170Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.523067Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42438","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.535780Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42458","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.543702Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42474","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.552386Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42490","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-09-19T23:24:18.605818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:42508","server-name":"","error":"EOF"}
	2025/09/19 23:25:15 WARNING: [core] [Server #6]grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	
	
	==> kernel <==
	 23:25:17 up  2:07,  0 users,  load average: 3.91, 3.27, 2.17
	Linux embed-certs-756077 6.8.0-1037-gcp #39~22.04.1-Ubuntu SMP Thu Aug 21 17:29:24 UTC 2025 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [70b10a10b2d7f763143bec483cc6467ce648f9daec7bf3b72a3ffefdcac7b0b4] <==
	I0919 23:24:21.365206       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I0919 23:24:21.365446       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I0919 23:24:21.365609       1 main.go:148] setting mtu 1500 for CNI 
	I0919 23:24:21.365628       1 main.go:178] kindnetd IP family: "ipv4"
	I0919 23:24:21.365652       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-09-19T23:24:21Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I0919 23:24:21.568361       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I0919 23:24:21.568392       1 controller.go:381] "Waiting for informer caches to sync"
	I0919 23:24:21.568406       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I0919 23:24:21.568546       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I0919 23:24:21.869496       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I0919 23:24:21.869526       1 metrics.go:72] Registering metrics
	I0919 23:24:21.869598       1 controller.go:711] "Syncing nftables rules"
	I0919 23:24:31.568760       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:31.568797       1 main.go:301] handling current node
	I0919 23:24:41.571183       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:41.571244       1 main.go:301] handling current node
	I0919 23:24:51.568195       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:24:51.568226       1 main.go:301] handling current node
	I0919 23:25:01.568229       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:25:01.568267       1 main.go:301] handling current node
	I0919 23:25:11.922425       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I0919 23:25:11.922464       1 main.go:301] handling current node
	
	
	==> kube-apiserver [9032dfc131c662ca6ba1220fa3726c4894f3d6745e84069ce021f410ca21d2f8] <==
	E0919 23:25:15.863551       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:15.863671       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.671815ms" method="GET" path="/apis/storage.k8s.io/v1/csinodes/embed-certs-756077" result=null
	E0919 23:25:15.863755       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="6.682695ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:25:15.863837       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.867412ms" method="GET" path="/api/v1/services" result=null
	E0919 23:25:15.867209       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:15.867405       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7.410614ms" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-756077" result=null
	E0919 23:25:16.815973       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: client disconnected" logger="UnhandledError"
	E0919 23:25:16.816014       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"client disconnected\"}: client disconnected" logger="UnhandledError"
	E0919 23:25:16.816057       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/api/v1/nodes/embed-certs-756077" auditID="8666442c-8703-43f6-b69a-63d6ae3400a7"
	E0919 23:25:16.816091       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="GET" URI="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-756077?timeout=10s" auditID="513305e4-b854-4e32-a967-3107d4f25554"
	E0919 23:25:16.816123       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="2.982µs" method="GET" path="/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/embed-certs-756077" result=null
	E0919 23:25:16.816587       1 writers.go:117] "Unhandled Error" err="apiserver was unable to close cleanly the response writer: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:16.816799       1 writers.go:117] "Unhandled Error" err="apiserver was unable to close cleanly the response writer: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:16.817564       1 writers.go:123] "Unhandled Error" err="apiserver was unable to write a JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:16.817684       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.494019ms" method="GET" path="/apis/storage.k8s.io/v1/csidrivers" result=null
	E0919 23:25:16.817730       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:16.817831       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="1.691696ms" method="GET" path="/api/v1/services" result=null
	E0919 23:25:16.817861       1 status.go:71] "Unhandled Error" err="apiserver received an error that is not an metav1.Status: &errors.errorString{s:\"http: Handler timeout\"}: http: Handler timeout" logger="UnhandledError"
	{"level":"warn","ts":"2025-09-19T23:25:16.818056Z","logger":"etcd-client","caller":"v3@v3.6.4/retry_interceptor.go:65","msg":"retrying of unary invoker failed","target":"etcd-endpoints://0xc0006bba40/127.0.0.1:2379","method":"/etcdserverpb.KV/Txn","attempt":0,"error":"rpc error: code = Canceled desc = context canceled"}
	E0919 23:25:16.818202       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 486.454µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0919 23:25:16.819202       1 writers.go:136] "Unhandled Error" err="apiserver was unable to write a fallback JSON response: http: Handler timeout" logger="UnhandledError"
	E0919 23:25:16.819298       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.222068ms" method="GET" path="/api/v1/nodes/embed-certs-756077" result=null
	E0919 23:25:16.819323       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="3.036075ms" method="POST" path="/api/v1/namespaces/default/events" result=null
	E0919 23:25:17.596958       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="POST" URI="/api/v1/namespaces/default/events" auditID="b1bd4875-7734-4722-9026-1c560ed9db79"
	E0919 23:25:17.597297       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="4.21µs" method="POST" path="/api/v1/namespaces/default/events" result=null
	
	
	==> kube-controller-manager [cd44e5b99501296d31ff10da499f8768c2e51e6a3a00597904da146bba3de464] <==
	I0919 23:24:22.443613       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I0919 23:24:22.443627       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I0919 23:24:22.443660       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I0919 23:24:22.443687       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I0919 23:24:22.443795       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I0919 23:24:22.444379       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I0919 23:24:22.444405       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I0919 23:24:22.444848       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I0919 23:24:22.445939       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I0919 23:24:22.446016       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I0919 23:24:22.446130       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I0919 23:24:22.446187       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I0919 23:24:22.450450       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I0919 23:24:22.455804       1 shared_informer.go:356] "Caches are synced" controller="job"
	I0919 23:24:22.458976       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I0919 23:24:22.465148       1 shared_informer.go:356] "Caches are synced" controller="node"
	I0919 23:24:22.465231       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0919 23:24:22.465272       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0919 23:24:22.465278       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I0919 23:24:22.465283       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I0919 23:24:22.472311       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I0919 23:24:22.476565       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I0919 23:24:22.478690       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	E0919 23:24:52.466183       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I0919 23:24:52.483648       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	
	
	==> kube-proxy [de878a7c62a1413a02b59a1081c2a518f93e4c981f7a27afc1c7fd20e5770499] <==
	I0919 23:24:21.169802       1 server_linux.go:53] "Using iptables proxy"
	I0919 23:24:21.234791       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I0919 23:24:21.334985       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I0919 23:24:21.335031       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E0919 23:24:21.335170       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0919 23:24:21.358336       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0919 23:24:21.358412       1 server_linux.go:132] "Using iptables Proxier"
	I0919 23:24:21.365768       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0919 23:24:21.366236       1 server.go:527] "Version info" version="v1.34.0"
	I0919 23:24:21.366273       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:21.367839       1 config.go:200] "Starting service config controller"
	I0919 23:24:21.367861       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I0919 23:24:21.367894       1 config.go:106] "Starting endpoint slice config controller"
	I0919 23:24:21.367900       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I0919 23:24:21.367914       1 config.go:403] "Starting serviceCIDR config controller"
	I0919 23:24:21.367920       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I0919 23:24:21.368152       1 config.go:309] "Starting node config controller"
	I0919 23:24:21.368216       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I0919 23:24:21.368243       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I0919 23:24:21.468363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I0919 23:24:21.468363       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I0919 23:24:21.468426       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [480cfa6126d628ba560d44d5d8b51dd3f1945d2e876917a1761adfb2f06b0e3b] <==
	I0919 23:24:17.657488       1 serving.go:386] Generated self-signed cert in-memory
	W0919 23:24:19.063230       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0919 23:24:19.063363       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0919 23:24:19.063381       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0919 23:24:19.063392       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0919 23:24:19.100053       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.0"
	I0919 23:24:19.100092       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0919 23:24:19.103151       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:19.103212       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0919 23:24:19.103771       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I0919 23:24:19.103947       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0919 23:24:19.203828       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.537407    3202 file_linux.go:61] "Unable to read config path" err="unable to create inotify: too many open files" path="/etc/kubernetes/manifests"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.538488    3202 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="cri-o" version="1.24.6" apiVersion="v1"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.539200    3202 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.539252    3202 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.539327    3202 plugins.go:580] "Error initializing dynamic plugin prober" err="error initializing watcher: too many open files"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.540096    3202 server.go:1262] "Started kubelet"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.544401    3202 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.544736    3202 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.544892    3202 dynamic_serving_content.go:144] "Failed to watch cert and key file, will retry later" err="error creating fsnotify watcher: too many open files"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.545256    3202 volume_manager.go:313] "Starting Kubelet Volume Manager"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.549259    3202 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.549284    3202 factory.go:223] Registration of the systemd container factory successfully
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.547504    3202 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.551551    3202 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"embed-certs-756077\" not found"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.552049    3202 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.552156    3202 server_v1.go:49] "podresources" method="list" useActivePods=true
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.552334    3202 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.552332    3202 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.552995    3202 reconciler.go:29] "Reconciler: start to sync state"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.553877    3202 server.go:310] "Adding debug handlers to kubelet server"
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: I0919 23:25:17.554352    3202 factory.go:223] Registration of the crio container factory successfully
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.554381    3202 manager.go:294] Registration of the raw container factory failed: inotify_init: too many open files
	Sep 19 23:25:17 embed-certs-756077 kubelet[3202]: E0919 23:25:17.554400    3202 kubelet.go:1686] "Failed to start cAdvisor" err="inotify_init: too many open files"
	Sep 19 23:25:17 embed-certs-756077 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Sep 19 23:25:17 embed-certs-756077 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
	
	==> kubernetes-dashboard [e051b85cf03946dded3f9bf77644a87b7922c52954281fb05391ea42f51b01e1] <==
	2025/09/19 23:24:29 Starting overwatch
	2025/09/19 23:24:29 Using namespace: kubernetes-dashboard
	2025/09/19 23:24:29 Using in-cluster config to connect to apiserver
	2025/09/19 23:24:29 Using secret token for csrf signing
	2025/09/19 23:24:29 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/09/19 23:24:29 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/09/19 23:24:29 Successful initial request to the apiserver, version: v1.34.0
	2025/09/19 23:24:29 Generating JWE encryption key
	2025/09/19 23:24:29 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/09/19 23:24:29 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/09/19 23:24:30 Initializing JWE encryption key from synchronized object
	2025/09/19 23:24:30 Creating in-cluster Sidecar client
	2025/09/19 23:24:30 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/09/19 23:24:30 Serving insecurely on HTTP port: 9090
	2025/09/19 23:25:00 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [1efebf99a406777932ec2bbd2fa1743c46bcec1ce7aa44447249d1ee5211f846] <==
	I0919 23:24:21.152215       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0919 23:24:51.155011       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [945fba0cfc75cfa72d89c811e40790826c3fad7b17490d6249775590f6f464fe] <==
	I0919 23:24:51.809728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0919 23:24:51.817605       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0919 23:24:51.817652       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W0919 23:24:51.820001       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:24:55.275809       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:24:59.535843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:03.134914       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:06.188766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.211305       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.217642       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:25:09.217840       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0919 23:25:09.218047       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc!
	I0919 23:25:09.218048       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"882b713e-23d7-418e-98f1-b46e56a60afb", APIVersion:"v1", ResourceVersion:"692", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc became leader
	W0919 23:25:09.220199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:09.224967       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I0919 23:25:09.319055       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-756077_6c91a282-ba5f-42aa-aabf-24ef5336adfc!
	W0919 23:25:12.038113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:12.044740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:14.048375       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:14.052832       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:16.057747       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W0919 23:25:16.083121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-756077 -n embed-certs-756077
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-756077 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: metrics-server-746fcd58dc-8gz2l
helpers_test.go:282: ======> post-mortem[TestStartStop/group/embed-certs/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l: exit status 1 (68.868319ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-746fcd58dc-8gz2l" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context embed-certs-756077 describe pod metrics-server-746fcd58dc-8gz2l: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.81s)

                                                
                                    

Test pass (286/329)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.28.0/json-events 5.07
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.06
9 TestDownloadOnly/v1.28.0/DeleteAll 0.22
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.0/json-events 4.14
13 TestDownloadOnly/v1.34.0/preload-exists 0
17 TestDownloadOnly/v1.34.0/LogsDuration 0.07
18 TestDownloadOnly/v1.34.0/DeleteAll 0.22
19 TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds 0.14
20 TestDownloadOnlyKic 1.18
21 TestBinaryMirror 0.84
22 TestOffline 94.38
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 157.93
31 TestAddons/serial/GCPAuth/Namespaces 0.16
32 TestAddons/serial/GCPAuth/FakeCredentials 10.51
35 TestAddons/parallel/Registry 15.48
36 TestAddons/parallel/RegistryCreds 0.64
38 TestAddons/parallel/InspektorGadget 5.27
39 TestAddons/parallel/MetricsServer 6.69
41 TestAddons/parallel/CSI 46.73
42 TestAddons/parallel/Headlamp 15.63
43 TestAddons/parallel/CloudSpanner 5.51
44 TestAddons/parallel/LocalPath 50.69
45 TestAddons/parallel/NvidiaDevicePlugin 6.51
46 TestAddons/parallel/Yakd 10.72
47 TestAddons/parallel/AmdGpuDevicePlugin 5.53
48 TestAddons/StoppedEnableDisable 16.55
49 TestCertOptions 28.51
50 TestCertExpiration 215.58
52 TestForceSystemdFlag 28.91
53 TestForceSystemdEnv 39.2
55 TestKVMDriverInstallOrUpdate 0.58
59 TestErrorSpam/setup 20.59
60 TestErrorSpam/start 0.64
61 TestErrorSpam/status 0.94
62 TestErrorSpam/pause 1.59
63 TestErrorSpam/unpause 1.69
64 TestErrorSpam/stop 2.55
67 TestFunctional/serial/CopySyncFile 0
68 TestFunctional/serial/StartWithProxy 69.17
69 TestFunctional/serial/AuditLog 0
70 TestFunctional/serial/SoftStart 6.66
71 TestFunctional/serial/KubeContext 0.05
72 TestFunctional/serial/KubectlGetPods 0.13
75 TestFunctional/serial/CacheCmd/cache/add_remote 3.14
76 TestFunctional/serial/CacheCmd/cache/add_local 1.05
77 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
78 TestFunctional/serial/CacheCmd/cache/list 0.05
79 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.3
80 TestFunctional/serial/CacheCmd/cache/cache_reload 1.79
81 TestFunctional/serial/CacheCmd/cache/delete 0.1
82 TestFunctional/serial/MinikubeKubectlCmd 0.11
83 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
84 TestFunctional/serial/ExtraConfig 42.49
85 TestFunctional/serial/ComponentHealth 0.07
86 TestFunctional/serial/LogsCmd 1.5
87 TestFunctional/serial/LogsFileCmd 1.52
88 TestFunctional/serial/InvalidService 3.84
90 TestFunctional/parallel/ConfigCmd 0.38
91 TestFunctional/parallel/DashboardCmd 8.98
92 TestFunctional/parallel/DryRun 0.42
93 TestFunctional/parallel/InternationalLanguage 0.19
94 TestFunctional/parallel/StatusCmd 1.09
99 TestFunctional/parallel/AddonsCmd 0.13
100 TestFunctional/parallel/PersistentVolumeClaim 25.89
102 TestFunctional/parallel/SSHCmd 0.56
103 TestFunctional/parallel/CpCmd 1.86
104 TestFunctional/parallel/MySQL 15.46
105 TestFunctional/parallel/FileSync 0.27
106 TestFunctional/parallel/CertSync 1.68
110 TestFunctional/parallel/NodeLabels 0.07
112 TestFunctional/parallel/NonActiveRuntimeDisabled 0.6
114 TestFunctional/parallel/License 0.28
116 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
117 TestFunctional/parallel/ProfileCmd/profile_list 0.46
118 TestFunctional/parallel/MountCmd/any-port 9.96
119 TestFunctional/parallel/ProfileCmd/profile_json_output 0.42
120 TestFunctional/parallel/Version/short 0.05
121 TestFunctional/parallel/Version/components 0.51
122 TestFunctional/parallel/ImageCommands/ImageListShort 0.22
123 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
124 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
125 TestFunctional/parallel/ImageCommands/ImageListYaml 0.24
126 TestFunctional/parallel/ImageCommands/ImageBuild 2.81
127 TestFunctional/parallel/ImageCommands/Setup 0.49
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.26
129 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.91
130 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.11
131 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.52
132 TestFunctional/parallel/ImageCommands/ImageRemove 0.58
133 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.68
134 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.83
135 TestFunctional/parallel/MountCmd/specific-port 2.03
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.41
138 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
140 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.22
141 TestFunctional/parallel/MountCmd/VerifyCleanup 1.76
142 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
143 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
147 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
148 TestFunctional/parallel/UpdateContextCmd/no_changes 0.14
149 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
150 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
151 TestFunctional/parallel/ServiceCmd/List 1.69
152 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
156 TestFunctional/delete_echo-server_images 0.04
157 TestFunctional/delete_my-image_image 0.02
158 TestFunctional/delete_minikube_cached_images 0.02
163 TestMultiControlPlane/serial/StartCluster 112.42
164 TestMultiControlPlane/serial/DeployApp 5.83
165 TestMultiControlPlane/serial/PingHostFromPods 1.11
167 TestMultiControlPlane/serial/NodeLabels 0.06
168 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.75
171 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.57
173 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.76
177 TestMultiControlPlane/serial/StopCluster 29.24
182 TestJSONOutput/start/Command 69.03
183 TestJSONOutput/start/Audit 0
185 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/pause/Command 0.79
189 TestJSONOutput/pause/Audit 0
191 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/unpause/Command 0.65
195 TestJSONOutput/unpause/Audit 0
197 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
200 TestJSONOutput/stop/Command 6.02
201 TestJSONOutput/stop/Audit 0
203 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
204 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
205 TestErrorJSONOutput 0.22
207 TestKicCustomNetwork/create_custom_network 30.33
208 TestKicCustomNetwork/use_default_bridge_network 23.72
209 TestKicExistingNetwork 26.46
210 TestKicCustomSubnet 24.48
211 TestKicStaticIP 26.84
212 TestMainNoArgs 0.06
213 TestMinikubeProfile 50.83
216 TestMountStart/serial/StartWithMountFirst 5.7
217 TestMountStart/serial/VerifyMountFirst 0.27
218 TestMountStart/serial/StartWithMountSecond 5.71
219 TestMountStart/serial/VerifyMountSecond 0.27
220 TestMountStart/serial/DeleteFirst 1.68
221 TestMountStart/serial/VerifyMountPostDelete 0.26
222 TestMountStart/serial/Stop 1.2
223 TestMountStart/serial/RestartStopped 7.41
224 TestMountStart/serial/VerifyMountPostStop 0.26
227 TestMultiNode/serial/FreshStart2Nodes 65.94
228 TestMultiNode/serial/DeployApp2Nodes 4.62
229 TestMultiNode/serial/PingHostFrom2Pods 0.75
230 TestMultiNode/serial/AddNode 53.77
231 TestMultiNode/serial/MultiNodeLabels 0.06
232 TestMultiNode/serial/ProfileList 0.65
233 TestMultiNode/serial/CopyFile 9.6
234 TestMultiNode/serial/StopNode 2.28
235 TestMultiNode/serial/StartAfterStop 7.69
236 TestMultiNode/serial/RestartKeepsNodes 80.98
237 TestMultiNode/serial/DeleteNode 5.28
238 TestMultiNode/serial/StopMultiNode 30.57
239 TestMultiNode/serial/RestartMultiNode 48.88
240 TestMultiNode/serial/ValidateNameConflict 25.34
245 TestPreload 112.79
247 TestScheduledStopUnix 100.36
250 TestInsufficientStorage 9.95
251 TestRunningBinaryUpgrade 47.97
253 TestKubernetesUpgrade 333.65
254 TestMissingContainerUpgrade 82.65
256 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
257 TestStoppedBinaryUpgrade/Setup 0.7
258 TestNoKubernetes/serial/StartWithK8s 37.6
259 TestStoppedBinaryUpgrade/Upgrade 62.61
260 TestNoKubernetes/serial/StartWithStopK8s 18.39
261 TestNoKubernetes/serial/Start 5.58
262 TestNoKubernetes/serial/VerifyK8sNotRunning 0.35
263 TestNoKubernetes/serial/ProfileList 2.07
264 TestStoppedBinaryUpgrade/MinikubeLogs 1.06
265 TestNoKubernetes/serial/Stop 1.21
266 TestNoKubernetes/serial/StartNoArgs 7.12
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.3
276 TestPause/serial/Start 41.54
277 TestPause/serial/SecondStartNoReconfiguration 7.25
278 TestPause/serial/Pause 0.72
279 TestPause/serial/VerifyStatus 0.32
283 TestPause/serial/Unpause 0.69
284 TestPause/serial/PauseAgain 0.78
285 TestPause/serial/DeletePaused 2.75
290 TestNetworkPlugins/group/false 3.25
291 TestPause/serial/VerifyDeletedResources 2.47
296 TestStartStop/group/old-k8s-version/serial/FirstStart 54.49
298 TestStartStop/group/no-preload/serial/FirstStart 52.29
299 TestStartStop/group/old-k8s-version/serial/DeployApp 10.34
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.92
301 TestStartStop/group/old-k8s-version/serial/Stop 16.31
302 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
303 TestStartStop/group/old-k8s-version/serial/SecondStart 51.46
304 TestStartStop/group/no-preload/serial/DeployApp 9.32
305 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.87
306 TestStartStop/group/no-preload/serial/Stop 18.32
307 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.19
308 TestStartStop/group/no-preload/serial/SecondStart 53.75
309 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
310 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
311 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
312 TestStartStop/group/old-k8s-version/serial/Pause 2.89
314 TestStartStop/group/embed-certs/serial/FirstStart 72.13
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 70.26
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.08
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.25
322 TestStartStop/group/newest-cni/serial/FirstStart 26.49
323 TestStartStop/group/embed-certs/serial/DeployApp 10.26
324 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.85
326 TestStartStop/group/newest-cni/serial/Stop 2.4
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.18
328 TestStartStop/group/newest-cni/serial/SecondStart 11.94
329 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.88
330 TestStartStop/group/embed-certs/serial/Stop 18.37
331 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.34
332 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
333 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
334 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.23
335 TestStartStop/group/newest-cni/serial/Pause 2.66
336 TestNetworkPlugins/group/auto/Start 70.47
337 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.7
338 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.19
339 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
340 TestStartStop/group/embed-certs/serial/SecondStart 48.77
341 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
342 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.46
343 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
344 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.08
345 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.23
347 TestNetworkPlugins/group/auto/KubeletFlags 0.33
348 TestNetworkPlugins/group/auto/NetCatPod 10.24
349 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
350 TestNetworkPlugins/group/kindnet/Start 47.24
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.08
352 TestNetworkPlugins/group/auto/DNS 0.14
353 TestNetworkPlugins/group/auto/Localhost 0.11
354 TestNetworkPlugins/group/auto/HairPin 0.11
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
356 TestNetworkPlugins/group/calico/Start 57.58
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.28
358 TestNetworkPlugins/group/custom-flannel/Start 50.06
359 TestNetworkPlugins/group/enable-default-cni/Start 100.67
360 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
361 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
362 TestNetworkPlugins/group/kindnet/NetCatPod 9.18
363 TestNetworkPlugins/group/kindnet/DNS 0.15
364 TestNetworkPlugins/group/kindnet/Localhost 0.11
365 TestNetworkPlugins/group/kindnet/HairPin 0.11
366 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.32
367 TestNetworkPlugins/group/calico/ControllerPod 6.01
368 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.24
369 TestNetworkPlugins/group/calico/KubeletFlags 0.29
370 TestNetworkPlugins/group/calico/NetCatPod 8.27
371 TestNetworkPlugins/group/custom-flannel/DNS 0.16
372 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
373 TestNetworkPlugins/group/custom-flannel/HairPin 0.13
374 TestNetworkPlugins/group/calico/DNS 0.15
375 TestNetworkPlugins/group/calico/Localhost 0.13
376 TestNetworkPlugins/group/calico/HairPin 0.13
377 TestNetworkPlugins/group/flannel/Start 49.85
378 TestNetworkPlugins/group/bridge/Start 63.59
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.3
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
383 TestNetworkPlugins/group/enable-default-cni/Localhost 0.12
384 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
386 TestNetworkPlugins/group/flannel/NetCatPod 8.19
387 TestNetworkPlugins/group/flannel/DNS 0.15
388 TestNetworkPlugins/group/flannel/Localhost 0.13
389 TestNetworkPlugins/group/flannel/HairPin 0.12
390 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
391 TestNetworkPlugins/group/bridge/NetCatPod 8.21
392 TestNetworkPlugins/group/bridge/DNS 0.14
393 TestNetworkPlugins/group/bridge/Localhost 0.11
394 TestNetworkPlugins/group/bridge/HairPin 0.12
TestDownloadOnly/v1.28.0/json-events (5.07s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-482753 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-482753 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (5.072487514s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (5.07s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I0919 22:14:06.415507   18175 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
I0919 22:14:06.415589   18175 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-482753
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-482753: exit status 85 (61.489675ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-482753 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-482753 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:01
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:01.389799   18188 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:01.390066   18188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:01.390076   18188 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:01.390081   18188 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:01.390286   18188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	W0919 22:14:01.390422   18188 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21594-14668/.minikube/config/config.json: open /home/jenkins/minikube-integration/21594-14668/.minikube/config/config.json: no such file or directory
	I0919 22:14:01.391403   18188 out.go:368] Setting JSON to true
	I0919 22:14:01.392277   18188 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3391,"bootTime":1758316650,"procs":204,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:01.392371   18188 start.go:140] virtualization: kvm guest
	I0919 22:14:01.394886   18188 out.go:99] [download-only-482753] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W0919 22:14:01.395036   18188 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball: no such file or directory
	I0919 22:14:01.395094   18188 notify.go:220] Checking for updates...
	I0919 22:14:01.396657   18188 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:01.398125   18188 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:01.399726   18188 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:14:01.401009   18188 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:14:01.402413   18188 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:01.404950   18188 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:01.405234   18188 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:01.430149   18188 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:01.430237   18188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:01.901216   18188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-19 22:14:01.888963516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:01.901324   18188 docker.go:318] overlay module found
	I0919 22:14:01.903073   18188 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:01.903097   18188 start.go:304] selected driver: docker
	I0919 22:14:01.903112   18188 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:01.903197   18188 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:01.962896   18188 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-09-19 22:14:01.952340614 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:01.963189   18188 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:01.963877   18188 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:01.964087   18188 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:01.966022   18188 out.go:171] Using Docker driver with root privileges
	I0919 22:14:01.967373   18188 cni.go:84] Creating CNI manager for ""
	I0919 22:14:01.967454   18188 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0919 22:14:01.967468   18188 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I0919 22:14:01.967560   18188 start.go:348] cluster config:
	{Name:download-only-482753 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-482753 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRIS
ocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:14:01.968994   18188 out.go:99] Starting "download-only-482753" primary control-plane node in "download-only-482753" cluster
	I0919 22:14:01.969062   18188 cache.go:123] Beginning downloading kic base image for docker with crio
	I0919 22:14:01.970589   18188 out.go:99] Pulling base image v0.0.48 ...
	I0919 22:14:01.970651   18188 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:01.970764   18188 image.go:81] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local docker daemon
	I0919 22:14:01.989353   18188 cache.go:152] Downloading gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:01.989417   18188 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:14:01.989453   18188 cache.go:58] Caching tarball of preloaded images
	I0919 22:14:01.989539   18188 image.go:65] Checking for gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 in local cache directory
	I0919 22:14:01.989569   18188 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:01.989657   18188 image.go:150] Writing gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 to local cache
	I0919 22:14:01.991357   18188 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I0919 22:14:01.991376   18188 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 22:14:02.018464   18188 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:72bc7f8573f574c02d8c9a9b3496176b -> /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	I0919 22:14:04.708177   18188 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 22:14:04.708304   18188 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4 ...
	I0919 22:14:05.736907   18188 cache.go:61] Finished verifying existence of preloaded tar for v1.28.0 on crio
	I0919 22:14:05.737411   18188 profile.go:143] Saving config to /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/download-only-482753/config.json ...
	I0919 22:14:05.737459   18188 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/download-only-482753/config.json: {Name:mk1894feef4594514bddd7352437e843d320847a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0919 22:14:05.737655   18188 preload.go:131] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I0919 22:14:05.737910   18188 download.go:108] Downloading: https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.0/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/21594-14668/.minikube/cache/linux/amd64/v1.28.0/kubectl
	I0919 22:14:05.935758   18188 cache.go:155] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 as a tarball
	
	
	* The control-plane node download-only-482753 host does not exist
	  To start a cluster, run: "minikube start -p download-only-482753"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.06s)
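Note (illustrative, not part of the test output): the stdout above records both the preload cache path and the md5 checksum that is appended to the download URL, so the cached tarball can be spot-checked by hand. A minimal shell sketch, assuming md5sum from coreutils is available on the host:

	# verify the cached preload tarball against the checksum from the download URL
	md5sum /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
	# expected digest (from the "?checksum=md5:..." parameter above): 72bc7f8573f574c02d8c9a9b3496176b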

TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.22s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-482753
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.0/json-events (4.14s)

=== RUN   TestDownloadOnly/v1.34.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-997981 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-997981 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.140557353s)
--- PASS: TestDownloadOnly/v1.34.0/json-events (4.14s)

TestDownloadOnly/v1.34.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.0/preload-exists
I0919 22:14:10.975930   18175 preload.go:131] Checking if preload exists for k8s version v1.34.0 and runtime crio
I0919 22:14:10.975976   18175 preload.go:146] Found local preload: /home/jenkins/minikube-integration/21594-14668/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.0/preload-exists (0.00s)

TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-997981
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-997981: exit status 85 (70.614845ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-482753 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-482753 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ delete  │ -p download-only-482753                                                                                                                                                   │ download-only-482753 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │ 19 Sep 25 22:14 UTC │
	│ start   │ -o=json --download-only -p download-only-997981 --force --alsologtostderr --kubernetes-version=v1.34.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-997981 │ jenkins │ v1.37.0 │ 19 Sep 25 22:14 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/09/19 22:14:06
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0919 22:14:06.875958   18534 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:14:06.876267   18534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:06.876279   18534 out.go:374] Setting ErrFile to fd 2...
	I0919 22:14:06.876283   18534 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:14:06.876480   18534 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:14:06.876934   18534 out.go:368] Setting JSON to true
	I0919 22:14:06.877816   18534 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3397,"bootTime":1758316650,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:14:06.877904   18534 start.go:140] virtualization: kvm guest
	I0919 22:14:06.879916   18534 out.go:99] [download-only-997981] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:14:06.880124   18534 notify.go:220] Checking for updates...
	I0919 22:14:06.881520   18534 out.go:171] MINIKUBE_LOCATION=21594
	I0919 22:14:06.883048   18534 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:14:06.884393   18534 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:14:06.885786   18534 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:14:06.887294   18534 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W0919 22:14:06.890511   18534 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0919 22:14:06.890759   18534 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:14:06.917225   18534 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:14:06.917294   18534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:06.974369   18534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-19 22:14:06.962984476 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:06.974484   18534 docker.go:318] overlay module found
	I0919 22:14:06.976406   18534 out.go:99] Using the docker driver based on user configuration
	I0919 22:14:06.976449   18534 start.go:304] selected driver: docker
	I0919 22:14:06.976458   18534 start.go:918] validating driver "docker" against <nil>
	I0919 22:14:06.976542   18534 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:14:07.032820   18534 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-09-19 22:14:07.022605677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:14:07.032961   18534 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I0919 22:14:07.033496   18534 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I0919 22:14:07.033644   18534 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I0919 22:14:07.035728   18534 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-997981 host does not exist
	  To start a cluster, run: "minikube start -p download-only-997981"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.0/LogsDuration (0.07s)

TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.0/DeleteAll (0.22s)

TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-997981
--- PASS: TestDownloadOnly/v1.34.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnlyKic (1.18s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-047394 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-047394" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-047394
--- PASS: TestDownloadOnlyKic (1.18s)

TestBinaryMirror (0.84s)

=== RUN   TestBinaryMirror
I0919 22:14:12.854184   18175 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.0/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-507391 --alsologtostderr --binary-mirror http://127.0.0.1:35061 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-507391" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-507391
--- PASS: TestBinaryMirror (0.84s)

TestOffline (94.38s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-117579 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-117579 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m31.151908799s)
helpers_test.go:175: Cleaning up "offline-crio-117579" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-117579
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-117579: (3.225812664s)
--- PASS: TestOffline (94.38s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-120954
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-120954: exit status 85 (55.025748ms)

-- stdout --
	* Profile "addons-120954" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-120954"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-120954
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-120954: exit status 85 (54.300889ms)

-- stdout --
	* Profile "addons-120954" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-120954"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (157.93s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-120954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-120954 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m37.927842812s)
--- PASS: TestAddons/Setup (157.93s)
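Note (illustrative, not part of the test output): the setup run above enables the full addon set at start time; on the resulting profile the same binary can also list and toggle addons individually, as the later addon tests do. A minimal sketch, assuming the addons-120954 profile is still running:

	out/minikube-linux-amd64 -p addons-120954 addons list
	out/minikube-linux-amd64 -p addons-120954 addons enable metrics-server
	out/minikube-linux-amd64 -p addons-120954 addons disable metrics-server --alsologtostderr -v=1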

TestAddons/serial/GCPAuth/Namespaces (0.16s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-120954 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-120954 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.16s)

TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-120954 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-120954 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [a0354461-0e45-4a4d-8c94-80d5da619001] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [a0354461-0e45-4a4d-8c94-80d5da619001] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.00462149s
addons_test.go:694: (dbg) Run:  kubectl --context addons-120954 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-120954 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-120954 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.51s)

TestAddons/parallel/Registry (15.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 27.610708ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-66898fdd98-pqdlc" [c19f5c5c-2819-451f-b495-8a22b5069243] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003063358s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-2vkz7" [f689ed57-4298-4236-b617-6dc01230bcb8] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003806094s
addons_test.go:392: (dbg) Run:  kubectl --context addons-120954 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-120954 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-120954 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.630902744s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 ip
2025/09/19 22:17:25 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.48s)
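Note (illustrative, not part of the test output): the Registry test above probes the addon both in-cluster (wget against registry.kube-system.svc.cluster.local) and from the host (GET on the node IP, port 5000). A minimal sketch of the same checks done by hand, assuming curl is available on the host; /v2/_catalog is the standard registry listing endpoint, and the pod name registry-probe is arbitrary:

	# from the host, using the node IP printed by "minikube ip" above
	curl -s http://192.168.49.2:5000/v2/_catalog
	# from inside the cluster, mirroring the test's wget probe
	kubectl --context addons-120954 run --rm -it registry-probe --restart=Never --image=gcr.io/k8s-minikube/busybox -- wget --spider -S http://registry.kube-system.svc.cluster.local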

TestAddons/parallel/RegistryCreds (0.64s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.086579ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-120954
addons_test.go:332: (dbg) Run:  kubectl --context addons-120954 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (0.64s)

TestAddons/parallel/InspektorGadget (5.27s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-28pr2" [2902baa3-e37c-4a3e-9773-9bed0d35f4c0] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003833581s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable inspektor-gadget --alsologtostderr -v=1
--- PASS: TestAddons/parallel/InspektorGadget (5.27s)

TestAddons/parallel/MetricsServer (6.69s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 27.646037ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-jkm77" [eb1ab5e8-c236-46d0-909d-5ecf7244e6da] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.002823222s
addons_test.go:463: (dbg) Run:  kubectl --context addons-120954 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.69s)

TestAddons/parallel/CSI (46.73s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0919 22:17:26.652558   18175 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0919 22:17:26.655253   18175 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0919 22:17:26.655277   18175 kapi.go:107] duration metric: took 2.724216ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 2.7339ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-120954 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-120954 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [6c25dafd-ca08-4e1d-9b48-a8b08758fe0b] Pending
helpers_test.go:352: "task-pv-pod" [6c25dafd-ca08-4e1d-9b48-a8b08758fe0b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [6c25dafd-ca08-4e1d-9b48-a8b08758fe0b] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.002951208s
addons_test.go:572: (dbg) Run:  kubectl --context addons-120954 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-120954 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-120954 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-120954 delete pod task-pv-pod
addons_test.go:588: (dbg) Run:  kubectl --context addons-120954 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-120954 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-120954 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [7ab2833c-2c55-4abc-99b0-95e1e3940bd6] Pending
helpers_test.go:352: "task-pv-pod-restore" [7ab2833c-2c55-4abc-99b0-95e1e3940bd6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [7ab2833c-2c55-4abc-99b0-95e1e3940bd6] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.00324181s
addons_test.go:614: (dbg) Run:  kubectl --context addons-120954 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-120954 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-120954 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.570966784s)
--- PASS: TestAddons/parallel/CSI (46.73s)
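
For context, the repeated "kubectl get pvc ... -o jsonpath={.status.phase}" lines above are the helper polling the claim until it reports Bound. A minimal Go sketch of that polling loop, assuming kubectl is on PATH and reusing the context and PVC names from this log (this is an illustration, not minikube's actual helper code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	// Names below are taken from the log above; adjust for your own cluster.
	const context, ns, pvc = "addons-120954", "default", "hpvc"
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		// Same query the helper issues: read only .status.phase via jsonpath.
		out, err := exec.Command("kubectl", "--context", context, "get", "pvc", pvc,
			"-o", "jsonpath={.status.phase}", "-n", ns).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			fmt.Println("PVC is Bound")
			return
		}
		time.Sleep(2 * time.Second) // the real helper uses its own retry interval
	}
	fmt.Println("timed out waiting for PVC to become Bound")
}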

                                                
                                    
TestAddons/parallel/Headlamp (15.63s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-120954 --alsologtostderr -v=1
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:352: "headlamp-85f8f8dc54-8rxhq" [970a4f7d-ac2d-41b1-bd54-f2d172063d30] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:352: "headlamp-85f8f8dc54-8rxhq" [970a4f7d-ac2d-41b1-bd54-f2d172063d30] Running
addons_test.go:813: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 9.003543835s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable headlamp --alsologtostderr -v=1: (5.83869877s)
--- PASS: TestAddons/parallel/Headlamp (15.63s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-85f6b7fc65-wjwp4" [15628b0a-4f3d-4f40-bc98-af92c7a37c07] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004015935s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.51s)

                                                
                                    
TestAddons/parallel/LocalPath (50.69s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-120954 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-120954 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-120954 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [83677358-0fd3-4923-8107-4fa48d1b852b] Pending
helpers_test.go:352: "test-local-path" [83677358-0fd3-4923-8107-4fa48d1b852b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [83677358-0fd3-4923-8107-4fa48d1b852b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [83677358-0fd3-4923-8107-4fa48d1b852b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004891406s
addons_test.go:967: (dbg) Run:  kubectl --context addons-120954 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 ssh "cat /opt/local-path-provisioner/pvc-9d1a6fb2-cfbb-47a0-a7e7-bc0b5a2d6b34_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-120954 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-120954 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.782319943s)
--- PASS: TestAddons/parallel/LocalPath (50.69s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-8d8t7" [6fcc270d-33df-4191-bb2c-39cd16df1785] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0032781s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.51s)

                                                
                                    
TestAddons/parallel/Yakd (10.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-zj5n9" [575c3729-db36-4e2b-9c4c-6b2ab92c79e5] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.004177083s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Done: out/minikube-linux-amd64 -p addons-120954 addons disable yakd --alsologtostderr -v=1: (5.716045988s)
--- PASS: TestAddons/parallel/Yakd (10.72s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-s454z" [03feb907-e169-41cb-af89-34fab99054dd] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003201496s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (5.53s)

                                                
                                    
TestAddons/StoppedEnableDisable (16.55s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-120954
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-120954: (16.293005658s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-120954
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-120954
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-120954
--- PASS: TestAddons/StoppedEnableDisable (16.55s)

                                                
                                    
TestCertOptions (28.51s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-886813 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-886813 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (25.50213633s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-886813 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-886813 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-886813 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-886813" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-886813
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-886813: (2.376047255s)
--- PASS: TestCertOptions (28.51s)
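
For context, the openssl step above inspects apiserver.crt to confirm that the requested --apiserver-ips and --apiserver-names ended up as SANs. A minimal Go sketch of the same check done locally with crypto/x509, assuming the certificate has first been copied out of the node to a local file named apiserver.crt (a hypothetical path, not part of the test):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"net"
	"os"
)

func main() {
	// Hypothetical local copy of the cert (the test reads it over `minikube ssh`).
	data, err := os.ReadFile("apiserver.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS names:", cert.DNSNames)
	fmt.Println("IP SANs:", cert.IPAddresses)
	// Value mirrors the --apiserver-ips flag shown in the log above.
	wantIP := net.ParseIP("192.168.15.15")
	found := false
	for _, ip := range cert.IPAddresses {
		if ip.Equal(wantIP) {
			found = true
		}
	}
	fmt.Println("contains 192.168.15.15:", found)
}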

                                                
                                    
TestCertExpiration (215.58s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-463082 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-463082 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.205681814s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-463082 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-463082 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (5.882132014s)
helpers_test.go:175: Cleaning up "cert-expiration-463082" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-463082
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-463082: (2.494444695s)
--- PASS: TestCertExpiration (215.58s)

                                                
                                    
TestForceSystemdFlag (28.91s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-849745 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-849745 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (25.774817512s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-849745 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-849745" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-849745
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-849745: (2.756778127s)
--- PASS: TestForceSystemdFlag (28.91s)

                                                
                                    
TestForceSystemdEnv (39.2s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-210125 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-210125 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.57970445s)
helpers_test.go:175: Cleaning up "force-systemd-env-210125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-210125
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-210125: (2.624220807s)
--- PASS: TestForceSystemdEnv (39.20s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0.58s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
I0919 23:19:53.849730   18175 install.go:51] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0919 23:19:53.849930   18175 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate797112666/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:19:53.880320   18175 install.go:134] /tmp/TestKVMDriverInstallOrUpdate797112666/001/docker-machine-driver-kvm2 version is {Version:v1.1.1 Commit:40a1a986a50eac533e396012e35516d3d6c63f36-dirty}
W0919 23:19:53.880397   18175 install.go:61] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.37.0 or later
W0919 23:19:53.880493   18175 out.go:176] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0919 23:19:53.880540   18175 download.go:108] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.37.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate797112666/001/docker-machine-driver-kvm2
I0919 23:19:54.286599   18175 install.go:123] Validating docker-machine-driver-kvm2, PATH=/tmp/TestKVMDriverInstallOrUpdate797112666/001:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0919 23:19:54.304691   18175 install.go:134] /tmp/TestKVMDriverInstallOrUpdate797112666/001/docker-machine-driver-kvm2 version is {Version:v1.37.0 Commit:1af8bdc072232de4b1fec3b6cc0e8337e118bc83}
--- PASS: TestKVMDriverInstallOrUpdate (0.58s)
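
For context, the install.go lines above show the helper comparing the existing driver version (v1.1.1) against the wanted release (v1.37.0) and downloading a newer binary when the installed one is too old. A simplified Go sketch of that decision, assuming plain dotted numeric versions (minikube itself uses a proper semver library for this comparison):

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// olderThan reports whether version a (e.g. "v1.1.1") is older than b ("v1.37.0").
// Simplified numeric comparison for illustration only.
func olderThan(a, b string) bool {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < len(pa) && i < len(pb); i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			return na < nb
		}
	}
	return len(pa) < len(pb)
}

func main() {
	installed, want := "v1.1.1", "v1.37.0" // values taken from the log above
	if olderThan(installed, want) {
		// The helper downloads the release binary at this point, as the W/I lines above show.
		fmt.Printf("driver %s is older than %s: download required\n", installed, want)
	} else {
		fmt.Println("installed driver is new enough")
	}
}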

                                                
                                    
TestErrorSpam/setup (20.59s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-580137 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-580137 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-580137 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-580137 --driver=docker  --container-runtime=crio: (20.593291412s)
--- PASS: TestErrorSpam/setup (20.59s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.94s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 status
--- PASS: TestErrorSpam/status (0.94s)

                                                
                                    
TestErrorSpam/pause (1.59s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 pause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 pause
--- PASS: TestErrorSpam/pause (1.59s)

                                                
                                    
TestErrorSpam/unpause (1.69s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 unpause
--- PASS: TestErrorSpam/unpause (1.69s)

                                                
                                    
TestErrorSpam/stop (2.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 stop: (2.363056004s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-580137 --log_dir /tmp/nospam-580137 stop
--- PASS: TestErrorSpam/stop (2.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21594-14668/.minikube/files/etc/test/nested/copy/18175/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (69.17s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0919 22:21:52.324362   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.338712   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.350195   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.371710   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.413207   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.494713   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.656214   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:52.977932   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:53.620022   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:21:54.901491   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-393395 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (1m9.172214789s)
--- PASS: TestFunctional/serial/StartWithProxy (69.17s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6.66s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0919 22:21:55.447761   18175 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --alsologtostderr -v=8
E0919 22:21:57.463907   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-393395 --alsologtostderr -v=8: (6.660047255s)
functional_test.go:678: soft start took 6.660739965s for "functional-393395" cluster.
I0919 22:22:02.108148   18175 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/SoftStart (6.66s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-393395 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache add registry.k8s.io/pause:3.1
E0919 22:22:02.586273   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 cache add registry.k8s.io/pause:3.1: (1.047028832s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 cache add registry.k8s.io/pause:3.3: (1.098616287s)
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.14s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-393395 /tmp/TestFunctionalserialCacheCmdcacheadd_local3582163614/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache add minikube-local-cache-test:functional-393395
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache delete minikube-local-cache-test:functional-393395
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-393395
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.3s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.30s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (287.804228ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.79s)
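
For context, this test removes a cached image inside the node, confirms crictl inspecti then fails, runs cache reload, and confirms the image is back. A minimal Go sketch that replays the same command sequence, assuming the minikube binary path, profile name, and image tag shown in this log:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command, echoes its output, and reports whether it exited zero.
func run(name string, args ...string) bool {
	out, err := exec.Command(name, args...).CombinedOutput()
	fmt.Printf("$ %s %v\n%s", name, args, out)
	return err == nil
}

func main() {
	// Binary path, profile and image name are taken from the log above.
	mk, profile, img := "out/minikube-linux-amd64", "functional-393395", "registry.k8s.io/pause:latest"

	run(mk, "-p", profile, "ssh", "sudo", "crictl", "rmi", img)                    // remove the image inside the node
	gone := !run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img)      // expected to fail now
	run(mk, "-p", profile, "cache", "reload")                                      // push cached images back into the node
	back := run(mk, "-p", profile, "ssh", "sudo", "crictl", "inspecti", img)       // expected to succeed again
	fmt.Println("removed:", gone, "restored:", back)
}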

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.10s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 kubectl -- --context functional-393395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-393395 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0919 22:22:12.828570   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:22:33.310386   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-393395 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.490597296s)
functional_test.go:776: restart took 42.490720797s for "functional-393395" cluster.
I0919 22:22:51.484951   18175 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestFunctional/serial/ExtraConfig (42.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-393395 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 logs: (1.498584811s)
--- PASS: TestFunctional/serial/LogsCmd (1.50s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 logs --file /tmp/TestFunctionalserialLogsFileCmd1331903792/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 logs --file /tmp/TestFunctionalserialLogsFileCmd1331903792/001/logs.txt: (1.518549428s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.52s)

                                                
                                    
TestFunctional/serial/InvalidService (3.84s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-393395 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-393395
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-393395: exit status 115 (344.062156ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30571 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-393395 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.84s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 config get cpus: exit status 14 (78.38066ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 config get cpus: exit status 14 (55.499783ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.38s)
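
For context, config get exits with a non-zero status (14 in the log above) once the key has been unset. A minimal Go sketch of checking that exit code, assuming the binary path and profile name from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Binary path and profile come from the log above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "functional-393395", "config", "get", "cpus")
	out, err := cmd.Output()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// The log shows exit status 14 when the key is not present in the config.
		fmt.Println("config get failed with exit code", ee.ExitCode())
		return
	}
	fmt.Println("cpus =", string(out))
}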

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-393395 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-393395 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 56605: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.98s)

                                                
                                    
TestFunctional/parallel/DryRun (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-393395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (169.04952ms)

                                                
                                                
-- stdout --
	* [functional-393395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:23:00.167776   55605 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:00.168061   55605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.168071   55605 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:00.168078   55605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.168309   55605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:23:00.168787   55605 out.go:368] Setting JSON to false
	I0919 22:23:00.169764   55605 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3930,"bootTime":1758316650,"procs":217,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:00.169855   55605 start.go:140] virtualization: kvm guest
	I0919 22:23:00.172039   55605 out.go:179] * [functional-393395] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:00.174158   55605 notify.go:220] Checking for updates...
	I0919 22:23:00.174194   55605 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:00.175751   55605 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:00.178359   55605 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:23:00.180450   55605 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:23:00.181678   55605 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:00.183236   55605 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:00.185216   55605 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:23:00.185762   55605 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:00.213453   55605 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:00.213562   55605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:00.275139   55605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:23:00.26329189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:00.275300   55605 docker.go:318] overlay module found
	I0919 22:23:00.277755   55605 out.go:179] * Using the docker driver based on existing profile
	I0919 22:23:00.279230   55605 start.go:304] selected driver: docker
	I0919 22:23:00.279248   55605 start.go:918] validating driver "docker" against &{Name:functional-393395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-393395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:00.279343   55605 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:00.281305   55605 out.go:203] 
	W0919 22:23:00.282735   55605 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0919 22:23:00.283949   55605 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.42s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-393395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-393395 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (190.735778ms)

                                                
                                                
-- stdout --
	* [functional-393395] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:23:00.323674   55715 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:23:00.323771   55715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.323776   55715 out.go:374] Setting ErrFile to fd 2...
	I0919 22:23:00.323780   55715 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:23:00.324048   55715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:23:00.324476   55715 out.go:368] Setting JSON to false
	I0919 22:23:00.325355   55715 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3930,"bootTime":1758316650,"procs":216,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 22:23:00.325448   55715 start.go:140] virtualization: kvm guest
	I0919 22:23:00.328187   55715 out.go:179] * [functional-393395] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I0919 22:23:00.329629   55715 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 22:23:00.329665   55715 notify.go:220] Checking for updates...
	I0919 22:23:00.332359   55715 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 22:23:00.334193   55715 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 22:23:00.336341   55715 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 22:23:00.337823   55715 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 22:23:00.340147   55715 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 22:23:00.342342   55715 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:23:00.343064   55715 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 22:23:00.369714   55715 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 22:23:00.369870   55715 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 22:23:00.445595   55715 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-09-19 22:23:00.430589902 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:
x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 22:23:00.445709   55715 docker.go:318] overlay module found
	I0919 22:23:00.450394   55715 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I0919 22:23:00.452886   55715 start.go:304] selected driver: docker
	I0919 22:23:00.452906   55715 start.go:918] validating driver "docker" against &{Name:functional-393395 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.48@sha256:7171c97a51623558720f8e5878e4f4637da093e2f2ed589997bedc6c1549b2b1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.0 ClusterName:functional-393395 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerN
ames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Mou
ntUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0919 22:23:00.452996   55715 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 22:23:00.455058   55715 out.go:203] 
	W0919 22:23:00.456542   55715 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0919 22:23:00.458285   55715 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.09s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.13s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (25.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [3a36486f-942d-409a-8694-d335efc830c9] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004401054s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-393395 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-393395 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-393395 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-393395 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:23:14.250909   18175 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [d8542139-209b-4db0-b169-baf8c9cc0ae1] Pending
E0919 22:23:14.271775   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "sp-pod" [d8542139-209b-4db0-b169-baf8c9cc0ae1] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [d8542139-209b-4db0-b169-baf8c9cc0ae1] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.00513433s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-393395 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-393395 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-393395 delete -f testdata/storage-provisioner/pod.yaml: (1.105666669s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-393395 apply -f testdata/storage-provisioner/pod.yaml
I0919 22:23:25.637631   18175 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8b16bcbd-663e-49d4-98e9-53354b326452] Pending
helpers_test.go:352: "sp-pod" [8b16bcbd-663e-49d4-98e9-53354b326452] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8b16bcbd-663e-49d4-98e9-53354b326452] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.004196413s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-393395 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.89s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh -n functional-393395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cp functional-393395:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2567704419/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh -n functional-393395 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh -n functional-393395 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.86s)

                                                
                                    
TestFunctional/parallel/MySQL (15.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-393395 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-9dhtx" [1cfd0d68-58b4-406e-a4e5-085226440d2d] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-9dhtx" [1cfd0d68-58b4-406e-a4e5-085226440d2d] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 14.003776224s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-393395 exec mysql-5bb876957f-9dhtx -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-393395 exec mysql-5bb876957f-9dhtx -- mysql -ppassword -e "show databases;": exit status 1 (120.382476ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0919 22:23:35.036066   18175 retry.go:31] will retry after 1.049381043s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-393395 exec mysql-5bb876957f-9dhtx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (15.46s)

                                                
                                    
TestFunctional/parallel/FileSync (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/18175/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /etc/test/nested/copy/18175/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.27s)

                                                
                                    
TestFunctional/parallel/CertSync (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/18175.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /etc/ssl/certs/18175.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/18175.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /usr/share/ca-certificates/18175.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/181752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /etc/ssl/certs/181752.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/181752.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /usr/share/ca-certificates/181752.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.68s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-393395 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.6s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "sudo systemctl is-active docker": exit status 1 (284.513671ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "sudo systemctl is-active containerd": exit status 1 (310.652339ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.60s)

                                                
                                    
TestFunctional/parallel/License (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.28s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "386.257892ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "71.114274ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.46s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (9.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdany-port2458344921/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1758320579504758209" to /tmp/TestFunctionalparallelMountCmdany-port2458344921/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1758320579504758209" to /tmp/TestFunctionalparallelMountCmdany-port2458344921/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1758320579504758209" to /tmp/TestFunctionalparallelMountCmdany-port2458344921/001/test-1758320579504758209
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (308.568953ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:22:59.813674   18175 retry.go:31] will retry after 566.128212ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 19 22:22 test-1758320579504758209
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh cat /mount-9p/test-1758320579504758209
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-393395 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [d192dd2a-c097-475b-8c86-7017b5553ffb] Pending
helpers_test.go:352: "busybox-mount" [d192dd2a-c097-475b-8c86-7017b5553ffb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [d192dd2a-c097-475b-8c86-7017b5553ffb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [d192dd2a-c097-475b-8c86-7017b5553ffb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.006126996s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-393395 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo umount -f /mount-9p"
2025/09/19 22:23:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdany-port2458344921/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.96s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "361.141171ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "58.72016ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.42s)

                                                
                                    
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
TestFunctional/parallel/Version/components (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.51s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-393395 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.0
registry.k8s.io/kube-proxy:v1.34.0
registry.k8s.io/kube-controller-manager:v1.34.0
registry.k8s.io/kube-apiserver:v1.34.0
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
localhost/minikube-local-cache-test:functional-393395
localhost/kicbase/echo-server:functional-393395
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-393395 image ls --format short --alsologtostderr:
I0919 22:23:35.362436   61987 out.go:360] Setting OutFile to fd 1 ...
I0919 22:23:35.362722   61987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:35.362734   61987 out.go:374] Setting ErrFile to fd 2...
I0919 22:23:35.362738   61987 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:35.362931   61987 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
I0919 22:23:35.363551   61987 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:35.363646   61987 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:35.363994   61987 cli_runner.go:164] Run: docker container inspect functional-393395 --format={{.State.Status}}
I0919 22:23:35.382955   61987 ssh_runner.go:195] Run: systemctl --version
I0919 22:23:35.383001   61987 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-393395
I0919 22:23:35.401297   61987 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/functional-393395/id_rsa Username:docker}
I0919 22:23:35.495421   61987 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-393395 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/minikube-local-cache-test     │ functional-393395  │ efff3ae4ce087 │ 3.33kB │
│ registry.k8s.io/kube-apiserver          │ v1.34.0            │ 90550c43ad2bc │ 89.1MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ docker.io/library/nginx                 │ alpine             │ 4a86014ec6994 │ 53.9MB │
│ docker.io/library/nginx                 │ latest             │ 41f689c209100 │ 197MB  │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-controller-manager │ v1.34.0            │ a0af72f2ec6d6 │ 76MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ localhost/kicbase/echo-server           │ functional-393395  │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ registry.k8s.io/kube-proxy              │ v1.34.0            │ df0860106674d │ 73.1MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.0            │ 46169d968e920 │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-393395 image ls --format table --alsologtostderr:
I0919 22:23:36.475854   62373 out.go:360] Setting OutFile to fd 1 ...
I0919 22:23:36.476092   62373 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.476119   62373 out.go:374] Setting ErrFile to fd 2...
I0919 22:23:36.476125   62373 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.476364   62373 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
I0919 22:23:36.477076   62373 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.477232   62373 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.477678   62373 cli_runner.go:164] Run: docker container inspect functional-393395 --format={{.State.Status}}
I0919 22:23:36.496588   62373 ssh_runner.go:195] Run: systemctl --version
I0919 22:23:36.496652   62373 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-393395
I0919 22:23:36.516647   62373 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/functional-393395/id_rsa Username:docker}
I0919 22:23:36.610927   62373 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-393395 image ls --format json --alsologtostderr:
[{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81","repoDigests":["docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285","docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e"],"repoTags":["docker.io/library/nginx:latest"],"size":"196550530"},{"id":"46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc","repoDigests":["registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140","registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff"],"repoTags":["regi
stry.k8s.io/kube-scheduler:v1.34.0"],"size":"53844823"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9","repoDigests":["docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8","docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a"],"repoTags":["docker.io/library/nginx:alpine"],"size":"53949946"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["localhost/kicbase/echo-server:functional-393395"],"size":"4943877"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s
.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90","repoDigests":["registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86","registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.0"],"size":"89050097"},{"id":"df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce","repoDigests":["registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067","registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.0"],"size":"73138071"},{"id":"cd073f4c5f6
a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha2
56:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"efff3ae4ce0874bb2b8563ea2
c9206f8bb40dfeff716e3c799310eb91abf3b1c","repoDigests":["localhost/minikube-local-cache-test@sha256:5331b3fb6a398f2a46617aa36847bfdf308e38129b36d51db5b1c322b2926cf8"],"repoTags":["localhost/minikube-local-cache-test:functional-393395"],"size":"3330"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924
a1d5a9f8ce6f6","registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.0"],"size":"76004183"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-393395 image ls --format json --alsologtostderr:
I0919 22:23:36.249413   62282 out.go:360] Setting OutFile to fd 1 ...
I0919 22:23:36.249691   62282 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.249702   62282 out.go:374] Setting ErrFile to fd 2...
I0919 22:23:36.249707   62282 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.249896   62282 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
I0919 22:23:36.250502   62282 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.250591   62282 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.250970   62282 cli_runner.go:164] Run: docker container inspect functional-393395 --format={{.State.Status}}
I0919 22:23:36.271752   62282 ssh_runner.go:195] Run: systemctl --version
I0919 22:23:36.271822   62282 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-393395
I0919 22:23:36.291879   62282 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/functional-393395/id_rsa Username:docker}
I0919 22:23:36.389129   62282 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-393395 image ls --format yaml --alsologtostderr:
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: 90550c43ad2bcfd11fcd5fd27d2eac5a7ca823be1308884b33dd816ec169be90
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:495d3253a47a9a64a62041d518678c8b101fb628488e729d9f52ddff7cf82a86
- registry.k8s.io/kube-apiserver@sha256:fe86fe2f64021df8efa1a939a290bc21c8c128c66fc00ebbb6b5dea4c7a06812
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.0
size: "89050097"
- id: a0af72f2ec6d628152b015a46d4074df8f77d5b686978987c70f48b8c7660634
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:82ea603ed3cce63f9f870f22299741e0011318391cf722dd924a1d5a9f8ce6f6
- registry.k8s.io/kube-controller-manager@sha256:f8ba6c082136e2c85ce71628c59c6574ca4b67f162911cb200c0a51a5b9bff81
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.0
size: "76004183"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 4a86014ec6994761b7f3118cf47e4b4fd6bac15fc6fa262c4f356386bbc0e9d9
repoDigests:
- docker.io/library/nginx@sha256:42a516af16b852e33b7682d5ef8acbd5d13fe08fecadc7ed98605ba5e3b26ab8
- docker.io/library/nginx@sha256:60e48a050b6408d0c5dd59b98b6e36bf0937a0bbe99304e3e9c0e63b7563443a
repoTags:
- docker.io/library/nginx:alpine
size: "53949946"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: efff3ae4ce0874bb2b8563ea2c9206f8bb40dfeff716e3c799310eb91abf3b1c
repoDigests:
- localhost/minikube-local-cache-test@sha256:5331b3fb6a398f2a46617aa36847bfdf308e38129b36d51db5b1c322b2926cf8
repoTags:
- localhost/minikube-local-cache-test:functional-393395
size: "3330"
- id: df0860106674df871eebbd01fede90c764bf472f5b97eca7e945761292e9b0ce
repoDigests:
- registry.k8s.io/kube-proxy@sha256:364da8a25c742d7a35e9635cb8cf42c1faf5b367760d0f9f9a75bdd1f9d52067
- registry.k8s.io/kube-proxy@sha256:5f71731a5eadcf74f3997dfc159bf5ca36e48c3387c19082fc21871e0dbb19af
repoTags:
- registry.k8s.io/kube-proxy:v1.34.0
size: "73138071"
- id: 46169d968e9203e8b10debaf898210fe11c94b5864c351ea0f6fcf621f659bdc
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:31b77e40d737b6d3e3b19b4afd681c9362aef06353075895452fc9a41fe34140
- registry.k8s.io/kube-scheduler@sha256:8fbe6d18415c8af9b31e177f6e444985f3a87349e083fe6eadd36753dddb17ff
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.0
size: "53844823"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-393395
size: "4943877"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: 41f689c209100e6cadf3ce7fdd02035e90dbd1d586716bf8fc6ea55c365b2d81
repoDigests:
- docker.io/library/nginx@sha256:27637a97e3d1d0518adc2a877b60db3779970f19474b6e586ddcbc2d5500e285
- docker.io/library/nginx@sha256:d5f28ef21aabddd098f3dbc21fe5b7a7d7a184720bc07da0b6c9b9820e97f25e
repoTags:
- docker.io/library/nginx:latest
size: "196550530"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-393395 image ls --format yaml --alsologtostderr:
I0919 22:23:35.589721   62035 out.go:360] Setting OutFile to fd 1 ...
I0919 22:23:35.589980   62035 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:35.589988   62035 out.go:374] Setting ErrFile to fd 2...
I0919 22:23:35.589992   62035 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:35.590230   62035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
I0919 22:23:35.590790   62035 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:35.590877   62035 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:35.591236   62035 cli_runner.go:164] Run: docker container inspect functional-393395 --format={{.State.Status}}
I0919 22:23:35.614259   62035 ssh_runner.go:195] Run: systemctl --version
I0919 22:23:35.614352   62035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-393395
I0919 22:23:35.634078   62035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/functional-393395/id_rsa Username:docker}
I0919 22:23:35.729320   62035 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.24s)
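Note: the stderr trace above shows that, with the cri-o runtime, the "image ls" command is ultimately served by running "sudo crictl images --output json" inside the node. The following is an illustrative Go sketch (not part of the test suite) that reproduces the same listing from the host; it assumes the minikube binary is on PATH and reuses the profile name from this run.

package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative only: list the images cached in the minikube node the same
	// way the test does, by shelling out to the minikube binary.
	out, err := exec.Command("minikube", "-p", "functional-393395",
		"image", "ls", "--format", "yaml").CombinedOutput()
	if err != nil {
		log.Fatalf("image ls failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}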

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh pgrep buildkitd: exit status 1 (257.101219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image build -t localhost/my-image:functional-393395 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 image build -t localhost/my-image:functional-393395 testdata/build --alsologtostderr: (2.33220873s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-393395 image build -t localhost/my-image:functional-393395 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 0aaecb771b1
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-393395
--> a93a7554199
Successfully tagged localhost/my-image:functional-393395
a93a7554199b4a60be463b1ce9941426af8c56f70a54aac8fa738916766eb4af
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-393395 image build -t localhost/my-image:functional-393395 testdata/build --alsologtostderr:
I0919 22:23:36.080560   62211 out.go:360] Setting OutFile to fd 1 ...
I0919 22:23:36.080739   62211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.080750   62211 out.go:374] Setting ErrFile to fd 2...
I0919 22:23:36.080754   62211 out.go:408] TERM=,COLORTERM=, which probably does not support color
I0919 22:23:36.080978   62211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
I0919 22:23:36.081605   62211 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.082285   62211 config.go:182] Loaded profile config "functional-393395": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
I0919 22:23:36.082673   62211 cli_runner.go:164] Run: docker container inspect functional-393395 --format={{.State.Status}}
I0919 22:23:36.101978   62211 ssh_runner.go:195] Run: systemctl --version
I0919 22:23:36.102034   62211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-393395
I0919 22:23:36.123600   62211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/functional-393395/id_rsa Username:docker}
I0919 22:23:36.218820   62211 build_images.go:161] Building image from path: /tmp/build.1574075284.tar
I0919 22:23:36.218885   62211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0919 22:23:36.229014   62211 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1574075284.tar
I0919 22:23:36.233018   62211 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1574075284.tar: stat -c "%s %y" /var/lib/minikube/build/build.1574075284.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1574075284.tar': No such file or directory
I0919 22:23:36.233057   62211 ssh_runner.go:362] scp /tmp/build.1574075284.tar --> /var/lib/minikube/build/build.1574075284.tar (3072 bytes)
I0919 22:23:36.261852   62211 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1574075284
I0919 22:23:36.273839   62211 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1574075284 -xf /var/lib/minikube/build/build.1574075284.tar
I0919 22:23:36.285596   62211 crio.go:315] Building image: /var/lib/minikube/build/build.1574075284
I0919 22:23:36.285674   62211 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-393395 /var/lib/minikube/build/build.1574075284 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0919 22:23:38.341171   62211 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-393395 /var/lib/minikube/build/build.1574075284 --cgroup-manager=cgroupfs: (2.055475562s)
I0919 22:23:38.341232   62211 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1574075284
I0919 22:23:38.350747   62211 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1574075284.tar
I0919 22:23:38.360229   62211 build_images.go:217] Built localhost/my-image:functional-393395 from /tmp/build.1574075284.tar
I0919 22:23:38.360267   62211 build_images.go:133] succeeded building to: functional-393395
I0919 22:23:38.360273   62211 build_images.go:134] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
E0919 22:24:36.193251   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:26:52.324568   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:27:20.034960   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 22:31:52.324325   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.81s)
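Note: the build log above shows the staging that minikube does under cri-o: the build context is tarred, copied to /var/lib/minikube/build, extracted, and built with "sudo podman build ... --cgroup-manager=cgroupfs". Below is an illustrative Go sketch of the host-side invocation the test makes; the tag and the testdata/build path are taken from the log, and the sketch is not the test's own code.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative sketch: build a local image inside the minikube node from a
	// directory containing a Dockerfile, mirroring the test's invocation.
	cmd := exec.Command("minikube", "-p", "functional-393395",
		"image", "build", "-t", "localhost/my-image:functional-393395", "testdata/build")
	cmd.Stdout, cmd.Stderr = log.Writer(), log.Writer()
	if err := cmd.Run(); err != nil {
		log.Fatalf("image build failed: %v", err)
	}
}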

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-393395
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image load --daemon kicbase/echo-server:functional-393395 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 image load --daemon kicbase/echo-server:functional-393395 --alsologtostderr: (1.04174609s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image load --daemon kicbase/echo-server:functional-393395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.91s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-393395
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image load --daemon kicbase/echo-server:functional-393395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.11s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image save kicbase/echo-server:functional-393395 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image rm kicbase/echo-server:functional-393395 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.58s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:424: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.443722473s)
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.68s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-393395
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 image save --daemon kicbase/echo-server:functional-393395 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-393395
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.83s)
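Note: the image subtests above round-trip an image between the host and the node (save to a tar file, remove, load from the tar file, save back into the docker daemon). An illustrative Go sketch of that round trip follows; the tar path is a placeholder, not the file used in this run.

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Illustrative sketch of the save/load round trip exercised above.
	tar := "/tmp/echo-server-save.tar" // placeholder path
	steps := [][]string{
		{"minikube", "-p", "functional-393395", "image", "save",
			"kicbase/echo-server:functional-393395", tar},
		{"minikube", "-p", "functional-393395", "image", "load", tar},
		{"minikube", "-p", "functional-393395", "image", "ls"},
	}
	for _, s := range steps {
		out, err := exec.Command(s[0], s[1:]...).CombinedOutput()
		if err != nil {
			log.Fatalf("%v failed: %v\n%s", s, err, out)
		}
		log.Printf("%v:\n%s", s, out)
	}
}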

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdspecific-port658118038/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (296.726616ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:23:09.758452   18175 retry.go:31] will retry after 684.296669ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdspecific-port658118038/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "sudo umount -f /mount-9p": exit status 1 (260.443769ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-393395 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdspecific-port658118038/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)
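Note: this test starts "minikube mount" as a background daemon on a fixed 9p port and then retries "findmnt -T /mount-9p" over ssh until the mount is visible (the first probe above fails and is retried). An illustrative Go sketch of the same flow; the host directory /tmp/mount-src is a placeholder, and the fixed sleep stands in for the test's retry loop.

package main

import (
	"log"
	"os/exec"
	"time"
)

func main() {
	// Illustrative sketch: run a 9p mount on a fixed port in the background,
	// then verify it from inside the node.
	mount := exec.Command("minikube", "mount", "-p", "functional-393395",
		"/tmp/mount-src:/mount-9p", "--port", "46464")
	if err := mount.Start(); err != nil {
		log.Fatal(err)
	}
	defer mount.Process.Kill()

	time.Sleep(2 * time.Second) // crude wait; the real test retries instead
	out, err := exec.Command("minikube", "-p", "functional-393395",
		"ssh", "findmnt -T /mount-9p").CombinedOutput()
	if err != nil {
		log.Fatalf("mount not visible: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}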

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 58749: os: process already finished
helpers_test.go:519: unable to terminate pid 58580: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-393395 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [21408dad-e2ee-4370-bad1-7e7843fa1a1e] Pending
helpers_test.go:352: "nginx-svc" [21408dad-e2ee-4370-bad1-7e7843fa1a1e] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [21408dad-e2ee-4370-bad1-7e7843fa1a1e] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.003909474s
I0919 22:23:18.635743   18175 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.22s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T" /mount1: exit status 1 (387.435416ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I0919 22:23:11.875675   18175 retry.go:31] will retry after 496.320412ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-393395 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-393395 /tmp/TestFunctionalparallelMountCmdVerifyCleanup853016138/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.76s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-393395 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.1.252 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-393395 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 service list: (1.693632386s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-393395 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-393395 service list -o json: (1.700313808s)
functional_test.go:1504: Took "1.700414657s" to run "out/minikube-linux-amd64 -p functional-393395 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

                                                
                                    
x
+
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-393395
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
x
+
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-393395
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
x
+
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-393395
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StartCluster (112.42s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (1m51.687591772s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (112.42s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (5.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 kubectl -- rollout status deployment/busybox: (3.764920629s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (5.83s)

                                                
                                    
x
+
TestMultiControlPlane/serial/PingHostFromPods (1.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-8s7jn -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-c7qf4 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 kubectl -- exec busybox-7b57f96db7-rnjl7 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.11s)

                                                
                                    
x
+
TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-984158 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.75s)

                                                
                                    
x
+
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.57s)

                                                
                                    
x
+
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.76s)

                                                
                                    
x
+
TestMultiControlPlane/serial/StopCluster (29.24s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-984158 stop --alsologtostderr -v 5: (29.12254901s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-984158 status --alsologtostderr -v 5: exit status 7 (118.082677ms)

                                                
                                                
-- stdout --
	ha-984158
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-984158-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-984158-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 22:46:12.102828  108832 out.go:360] Setting OutFile to fd 1 ...
	I0919 22:46:12.102950  108832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.102962  108832 out.go:374] Setting ErrFile to fd 2...
	I0919 22:46:12.102967  108832 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 22:46:12.103223  108832 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 22:46:12.103433  108832 out.go:368] Setting JSON to false
	I0919 22:46:12.103455  108832 mustload.go:65] Loading cluster: ha-984158
	I0919 22:46:12.103549  108832 notify.go:220] Checking for updates...
	I0919 22:46:12.103922  108832 config.go:182] Loaded profile config "ha-984158": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 22:46:12.103949  108832 status.go:174] checking status of ha-984158 ...
	I0919 22:46:12.104463  108832 cli_runner.go:164] Run: docker container inspect ha-984158 --format={{.State.Status}}
	I0919 22:46:12.126613  108832 status.go:371] ha-984158 host status = "Stopped" (err=<nil>)
	I0919 22:46:12.126643  108832 status.go:384] host is not running, skipping remaining checks
	I0919 22:46:12.126649  108832 status.go:176] ha-984158 status: &{Name:ha-984158 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:46:12.126676  108832 status.go:174] checking status of ha-984158-m02 ...
	I0919 22:46:12.126980  108832 cli_runner.go:164] Run: docker container inspect ha-984158-m02 --format={{.State.Status}}
	I0919 22:46:12.146942  108832 status.go:371] ha-984158-m02 host status = "Stopped" (err=<nil>)
	I0919 22:46:12.146995  108832 status.go:384] host is not running, skipping remaining checks
	I0919 22:46:12.147007  108832 status.go:176] ha-984158-m02 status: &{Name:ha-984158-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 22:46:12.147036  108832 status.go:174] checking status of ha-984158-m04 ...
	I0919 22:46:12.147397  108832 cli_runner.go:164] Run: docker container inspect ha-984158-m04 --format={{.State.Status}}
	I0919 22:46:12.166748  108832 status.go:371] ha-984158-m04 host status = "Stopped" (err=<nil>)
	I0919 22:46:12.166771  108832 status.go:384] host is not running, skipping remaining checks
	I0919 22:46:12.166777  108832 status.go:176] ha-984158-m04 status: &{Name:ha-984158-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (29.24s)
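Note: after "minikube stop", "minikube status" exits non-zero (exit status 7 in this run) while still printing per-node state, so the non-zero exit here is expected rather than a failure. An illustrative Go sketch of handling that exit code on the host; the profile name is reused from this run.

package main

import (
	"errors"
	"log"
	"os/exec"
)

func main() {
	// Illustrative sketch: a stopped cluster makes `status` exit non-zero
	// while still reporting per-node state on stdout.
	out, err := exec.Command("minikube", "-p", "ha-984158", "status").CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		log.Printf("status exited with code %d (cluster stopped?)\n%s",
			exitErr.ExitCode(), out)
		return
	}
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("cluster running:\n%s", out)
}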

                                                
                                    
x
+
TestJSONOutput/start/Command (69.03s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-533431 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-533431 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m9.029580721s)
--- PASS: TestJSONOutput/start/Command (69.03s)

                                                
                                    
x
+
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/Command (0.79s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-533431 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.79s)

                                                
                                    
x
+
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (0.65s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-533431 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.65s)

                                                
                                    
x
+
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/Command (6.02s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-533431 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-533431 --output=json --user=testUser: (6.021281752s)
--- PASS: TestJSONOutput/stop/Command (6.02s)

                                                
                                    
x
+
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
x
+
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
x
+
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-583125 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-583125 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.00802ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"fced1be8-8440-4656-8afe-6974e89b13be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-583125] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ec488d50-9098-4074-9d8a-a8ba73eff1e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"21d44b99-8386-4210-b9e3-76592f858bb2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"704e1feb-3362-4ebc-a4e3-eb4161c895ff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig"}}
	{"specversion":"1.0","id":"7046796e-6410-4bae-bd22-ac7dcef95ec4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube"}}
	{"specversion":"1.0","id":"cbcfbf0a-bf76-47bf-b566-9e8e07f622b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"6ae8e475-5451-4d30-95d4-e5a20cd06db6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"9ab3c5c7-8b5a-4518-8af7-c4f6eb330e80","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-583125" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-583125
--- PASS: TestErrorJSONOutput (0.22s)
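Note: with --output=json, minikube emits one CloudEvents-style JSON object per line (specversion, id, source, type, data), and the expected failure above surfaces as a "io.k8s.sigs.minikube.error" event with name DRV_UNSUPPORTED_OS and exitcode 56. An illustrative Go sketch that scans such output and reports error events; it models only the fields it needs and is not the test's own parser.

package main

import (
	"bufio"
	"encoding/json"
	"log"
	"os"
)

// event mirrors the JSON lines shown in the stdout above; only the fields
// used below are declared.
type event struct {
	Type string                 `json:"type"`
	Data map[string]interface{} `json:"data"`
}

func main() {
	// Illustrative sketch: read minikube --output=json lines from stdin and
	// surface any error events.
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			log.Printf("minikube error: name=%v exitcode=%v message=%v",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}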

                                                
                                    
x
+
TestKicCustomNetwork/create_custom_network (30.33s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-083925 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-083925 --network=: (28.126199909s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-083925" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-083925
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-083925: (2.178435362s)
--- PASS: TestKicCustomNetwork/create_custom_network (30.33s)

                                                
                                    
x
+
TestKicCustomNetwork/use_default_bridge_network (23.72s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-327050 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-327050 --network=bridge: (21.758091701s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-327050" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-327050
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-327050: (1.939822703s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.72s)

                                                
                                    
x
+
TestKicExistingNetwork (26.46s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I0919 23:05:45.618012   18175 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0919 23:05:45.635973   18175 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0919 23:05:45.636047   18175 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0919 23:05:45.636072   18175 cli_runner.go:164] Run: docker network inspect existing-network
W0919 23:05:45.654805   18175 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0919 23:05:45.654842   18175 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I0919 23:05:45.654860   18175 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I0919 23:05:45.655003   18175 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0919 23:05:45.671957   18175 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-8b1b6c79ac61 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:3a:3e:90:cd:d5:3a} reservation:<nil>}
I0919 23:05:45.672376   18175 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000012ea0}
I0919 23:05:45.672417   18175 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0919 23:05:45.672462   18175 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0919 23:05:45.730881   18175 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-901727 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-901727 --network=existing-network: (24.309295914s)
helpers_test.go:175: Cleaning up "existing-network-901727" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-901727
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-901727: (1.997758209s)
I0919 23:06:12.056224   18175 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.46s)
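
The trace above shows the subnet-selection step before the cluster starts: 192.168.49.0/24 is skipped because an existing bridge already uses it, and 192.168.58.0/24 is picked as the first free candidate for `docker network create`. The sketch below imitates that scan in miniature; the step of 9 between candidates and the hard-coded "taken" set are inferred from the addresses in this report, not taken from minikube's source, and a real implementation would consult `docker network inspect` instead.

    // Toy version of the "skip taken subnet, use next free one" scan above.
    package main

    import "fmt"

    func main() {
    	// Stand-in for the subnets already claimed by existing Docker bridges.
    	taken := map[string]bool{"192.168.49.0/24": true}

    	for third := 49; third <= 238; third += 9 {
    		candidate := fmt.Sprintf("192.168.%d.0/24", third)
    		if taken[candidate] {
    			fmt.Println("skipping subnet", candidate, "that is taken")
    			continue
    		}
    		fmt.Println("using free private subnet", candidate)
    		break
    	}
    }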

                                                
                                    
x
+
TestKicCustomSubnet (24.48s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-882544 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-882544 --subnet=192.168.60.0/24: (22.315212787s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-882544 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-882544" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-882544
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-882544: (2.148431023s)
--- PASS: TestKicCustomSubnet (24.48s)
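
The verification step above reads the network's first IPAM subnet via a Go template and compares it to the value passed with --subnet. A rough equivalent of that check, assuming `docker` is on PATH and reusing the network name from this run:

    // Compare the subnet Docker reports for the network with the one requested.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	const wantSubnet = "192.168.60.0/24"

    	out, err := exec.Command("docker", "network", "inspect", "custom-subnet-882544",
    		"--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
    	if err != nil {
    		fmt.Println("docker network inspect failed:", err)
    		return
    	}
    	got := strings.TrimSpace(string(out))
    	fmt.Printf("want %s, got %s, match=%v\n", wantSubnet, got, got == wantSubnet)
    }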

                                                
                                    
x
+
TestKicStaticIP (26.84s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-720738 --static-ip=192.168.200.200
E0919 23:06:52.324384   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-720738 --static-ip=192.168.200.200: (24.549122873s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-720738 ip
helpers_test.go:175: Cleaning up "static-ip-720738" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-720738
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-720738: (2.152482239s)
--- PASS: TestKicStaticIP (26.84s)

                                                
                                    
x
+
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
x
+
TestMinikubeProfile (50.83s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-304336 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-304336 --driver=docker  --container-runtime=crio: (21.927465807s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-318997 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-318997 --driver=docker  --container-runtime=crio: (22.939782848s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-304336
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-318997
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-318997" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-318997
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-318997: (2.371128217s)
helpers_test.go:175: Cleaning up "first-304336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-304336
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-304336: (2.386810761s)
--- PASS: TestMinikubeProfile (50.83s)
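
`profile list -ojson` is how the test confirms that both profiles are registered after each `profile` switch. The sketch below consumes that output without assuming a fixed schema, decoding only the top-level keys; it assumes a `minikube` binary on PATH rather than the `out/minikube-linux-amd64` build used in this run.

    // List the top-level keys of `minikube profile list -ojson` without
    // committing to a particular profile-list schema.
    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    func main() {
    	out, err := exec.Command("minikube", "profile", "list", "-ojson").Output()
    	if err != nil {
    		fmt.Println("profile list failed:", err)
    		return
    	}
    	var doc map[string]json.RawMessage
    	if err := json.Unmarshal(out, &doc); err != nil {
    		fmt.Println("unexpected output:", err)
    		return
    	}
    	for key, raw := range doc {
    		fmt.Printf("%s: %d bytes of JSON\n", key, len(raw))
    	}
    }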

                                                
                                    
x
+
TestMountStart/serial/StartWithMountFirst (5.7s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-002456 --memory=3072 --mount-string /tmp/TestMountStartserial3739252484/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
E0919 23:07:58.614611   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-002456 --memory=3072 --mount-string /tmp/TestMountStartserial3739252484/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.697967514s)
--- PASS: TestMountStart/serial/StartWithMountFirst (5.70s)
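
The --mount-string argument above is a host:guest pair: the per-test temp directory on the host, mounted at /minikube-host in the guest, which the VerifyMount* steps later check with `ssh -- ls /minikube-host`. A tiny sketch of how such a pair splits (the path is the one from this run):

    // Split a --mount-string value into its host and guest halves.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	mountString := "/tmp/TestMountStartserial3739252484/001:/minikube-host"
    	parts := strings.SplitN(mountString, ":", 2)
    	fmt.Println("host dir: ", parts[0])
    	fmt.Println("guest dir:", parts[1])
    }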

                                                
                                    
x
+
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-002456 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                    
x
+
TestMountStart/serial/StartWithMountSecond (5.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-017439 --memory=3072 --mount-string /tmp/TestMountStartserial3739252484/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-017439 --memory=3072 --mount-string /tmp/TestMountStartserial3739252484/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (4.71116966s)
--- PASS: TestMountStart/serial/StartWithMountSecond (5.71s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-017439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
x
+
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-002456 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-002456 --alsologtostderr -v=5: (1.684513748s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-017439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
x
+
TestMountStart/serial/Stop (1.2s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-017439
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-017439: (1.19596941s)
--- PASS: TestMountStart/serial/Stop (1.20s)

                                                
                                    
x
+
TestMountStart/serial/RestartStopped (7.41s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-017439
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-017439: (6.414601671s)
--- PASS: TestMountStart/serial/RestartStopped (7.41s)

                                                
                                    
x
+
TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-017439 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.26s)

                                                
                                    
x
+
TestMultiNode/serial/FreshStart2Nodes (65.94s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-409704 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-409704 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m5.450590151s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (65.94s)

                                                
                                    
x
+
TestMultiNode/serial/DeployApp2Nodes (4.62s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-409704 -- rollout status deployment/busybox: (3.153944057s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-jnjqc -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-qp9dg -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-jnjqc -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-qp9dg -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-jnjqc -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-qp9dg -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.62s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (0.75s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-jnjqc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-jnjqc -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-qp9dg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-409704 -- exec busybox-7b57f96db7-qp9dg -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.75s)
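
The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) extracts the address that host.minikube.internal resolves to inside the pod, and the next step pings it (192.168.67.1 in this run). The sketch below reproduces that extraction in Go against an illustrative busybox-style nslookup output; the sample text is not captured from this run, and strings.Fields only approximates the single-space `cut`.

    // Pick out line 5, field 3 of an nslookup result, as the awk/cut pipeline does.
    package main

    import (
    	"fmt"
    	"strings"
    )

    func main() {
    	sample := strings.Join([]string{
    		"Server:    10.96.0.10",
    		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local",
    		"",
    		"Name:      host.minikube.internal",
    		"Address 1: 192.168.67.1 host.minikube.internal",
    	}, "\n")

    	lines := strings.Split(sample, "\n")
    	fields := strings.Fields(lines[4]) // awk 'NR==5' selects the fifth line
    	fmt.Println(fields[2])             // cut -f3: the address that gets pinged
    }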

                                                
                                    
x
+
TestMultiNode/serial/AddNode (53.77s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-409704 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-409704 -v=5 --alsologtostderr: (53.139821733s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (53.77s)

                                                
                                    
x
+
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-409704 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
x
+
TestMultiNode/serial/ProfileList (0.65s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.65s)

                                                
                                    
x
+
TestMultiNode/serial/CopyFile (9.6s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp testdata/cp-test.txt multinode-409704:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile621028500/001/cp-test_multinode-409704.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704:/home/docker/cp-test.txt multinode-409704-m02:/home/docker/cp-test_multinode-409704_multinode-409704-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test_multinode-409704_multinode-409704-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704:/home/docker/cp-test.txt multinode-409704-m03:/home/docker/cp-test_multinode-409704_multinode-409704-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test_multinode-409704_multinode-409704-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp testdata/cp-test.txt multinode-409704-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile621028500/001/cp-test_multinode-409704-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m02:/home/docker/cp-test.txt multinode-409704:/home/docker/cp-test_multinode-409704-m02_multinode-409704.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test_multinode-409704-m02_multinode-409704.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m02:/home/docker/cp-test.txt multinode-409704-m03:/home/docker/cp-test_multinode-409704-m02_multinode-409704-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test_multinode-409704-m02_multinode-409704-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp testdata/cp-test.txt multinode-409704-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile621028500/001/cp-test_multinode-409704-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m03:/home/docker/cp-test.txt multinode-409704:/home/docker/cp-test_multinode-409704-m03_multinode-409704.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704 "sudo cat /home/docker/cp-test_multinode-409704-m03_multinode-409704.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 cp multinode-409704-m03:/home/docker/cp-test.txt multinode-409704-m02:/home/docker/cp-test_multinode-409704-m03_multinode-409704-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 ssh -n multinode-409704-m02 "sudo cat /home/docker/cp-test_multinode-409704-m03_multinode-409704-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.60s)

                                                
                                    
x
+
TestMultiNode/serial/StopNode (2.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-409704 node stop m03: (1.304000625s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-409704 status: exit status 7 (492.40697ms)

                                                
                                                
-- stdout --
	multinode-409704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr: exit status 7 (478.414307ms)

                                                
                                                
-- stdout --
	multinode-409704
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-409704-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-409704-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:10:36.125885  173018 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:10:36.126206  173018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:10:36.126217  173018 out.go:374] Setting ErrFile to fd 2...
	I0919 23:10:36.126223  173018 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:10:36.126436  173018 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:10:36.126616  173018 out.go:368] Setting JSON to false
	I0919 23:10:36.126637  173018 mustload.go:65] Loading cluster: multinode-409704
	I0919 23:10:36.126678  173018 notify.go:220] Checking for updates...
	I0919 23:10:36.127017  173018 config.go:182] Loaded profile config "multinode-409704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:10:36.127055  173018 status.go:174] checking status of multinode-409704 ...
	I0919 23:10:36.127523  173018 cli_runner.go:164] Run: docker container inspect multinode-409704 --format={{.State.Status}}
	I0919 23:10:36.148202  173018 status.go:371] multinode-409704 host status = "Running" (err=<nil>)
	I0919 23:10:36.148233  173018 host.go:66] Checking if "multinode-409704" exists ...
	I0919 23:10:36.148478  173018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-409704
	I0919 23:10:36.166641  173018 host.go:66] Checking if "multinode-409704" exists ...
	I0919 23:10:36.166879  173018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:10:36.166931  173018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409704
	I0919 23:10:36.185059  173018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/multinode-409704/id_rsa Username:docker}
	I0919 23:10:36.277262  173018 ssh_runner.go:195] Run: systemctl --version
	I0919 23:10:36.281814  173018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:10:36.294387  173018 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:10:36.346166  173018 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-09-19 23:10:36.336987546 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:10:36.346834  173018 kubeconfig.go:125] found "multinode-409704" server: "https://192.168.67.2:8443"
	I0919 23:10:36.346868  173018 api_server.go:166] Checking apiserver status ...
	I0919 23:10:36.346915  173018 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0919 23:10:36.359247  173018 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	W0919 23:10:36.369166  173018 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0919 23:10:36.369220  173018 ssh_runner.go:195] Run: ls
	I0919 23:10:36.373011  173018 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0919 23:10:36.377425  173018 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0919 23:10:36.377449  173018 status.go:463] multinode-409704 apiserver status = Running (err=<nil>)
	I0919 23:10:36.377458  173018 status.go:176] multinode-409704 status: &{Name:multinode-409704 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:10:36.377473  173018 status.go:174] checking status of multinode-409704-m02 ...
	I0919 23:10:36.377760  173018 cli_runner.go:164] Run: docker container inspect multinode-409704-m02 --format={{.State.Status}}
	I0919 23:10:36.395463  173018 status.go:371] multinode-409704-m02 host status = "Running" (err=<nil>)
	I0919 23:10:36.395486  173018 host.go:66] Checking if "multinode-409704-m02" exists ...
	I0919 23:10:36.395721  173018 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-409704-m02
	I0919 23:10:36.413633  173018 host.go:66] Checking if "multinode-409704-m02" exists ...
	I0919 23:10:36.413886  173018 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0919 23:10:36.413919  173018 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-409704-m02
	I0919 23:10:36.431482  173018 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32913 SSHKeyPath:/home/jenkins/minikube-integration/21594-14668/.minikube/machines/multinode-409704-m02/id_rsa Username:docker}
	I0919 23:10:36.525322  173018 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0919 23:10:36.537245  173018 status.go:176] multinode-409704-m02 status: &{Name:multinode-409704-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:10:36.537286  173018 status.go:174] checking status of multinode-409704-m03 ...
	I0919 23:10:36.537556  173018 cli_runner.go:164] Run: docker container inspect multinode-409704-m03 --format={{.State.Status}}
	I0919 23:10:36.556240  173018 status.go:371] multinode-409704-m03 host status = "Stopped" (err=<nil>)
	I0919 23:10:36.556263  173018 status.go:384] host is not running, skipping remaining checks
	I0919 23:10:36.556269  173018 status.go:176] multinode-409704-m03 status: &{Name:multinode-409704-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
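
The --alsologtostderr trace above shows how per-node status is assembled: inspect the container state, SSH in to check kubelet, and for the control-plane node probe https://192.168.67.2:8443/healthz and expect 200/ok. Below is a minimal sketch of that last probe; unlike minikube's own check it skips TLS verification instead of loading the cluster CA, purely to keep the example self-contained, and the address is the one from this run.

    // Probe the apiserver healthz endpoint and report the status code and body.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			// Skip certificate verification for brevity; the real check trusts the cluster CA.
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get("https://192.168.67.2:8443/healthz")
    	if err != nil {
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer resp.Body.Close()
    	body, _ := io.ReadAll(resp.Body)
    	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
    }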

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (7.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-409704 node start m03 -v=5 --alsologtostderr: (6.988619517s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.69s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (80.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-409704
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-409704
E0919 23:11:01.684792   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-409704: (29.652117136s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-409704 --wait=true -v=5 --alsologtostderr
E0919 23:11:35.403909   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:11:52.324257   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-409704 --wait=true -v=5 --alsologtostderr: (51.224574698s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-409704
--- PASS: TestMultiNode/serial/RestartKeepsNodes (80.98s)

                                                
                                    
x
+
TestMultiNode/serial/DeleteNode (5.28s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-409704 node delete m03: (4.69610821s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.28s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (30.57s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-409704 stop: (30.3879267s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-409704 status: exit status 7 (94.046896ms)

                                                
                                                
-- stdout --
	multinode-409704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-409704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr: exit status 7 (89.21675ms)

                                                
                                                
-- stdout --
	multinode-409704
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-409704-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:12:41.038862  183226 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:12:41.039140  183226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:12:41.039148  183226 out.go:374] Setting ErrFile to fd 2...
	I0919 23:12:41.039152  183226 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:12:41.039324  183226 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:12:41.039490  183226 out.go:368] Setting JSON to false
	I0919 23:12:41.039524  183226 mustload.go:65] Loading cluster: multinode-409704
	I0919 23:12:41.039659  183226 notify.go:220] Checking for updates...
	I0919 23:12:41.039873  183226 config.go:182] Loaded profile config "multinode-409704": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:12:41.039896  183226 status.go:174] checking status of multinode-409704 ...
	I0919 23:12:41.040393  183226 cli_runner.go:164] Run: docker container inspect multinode-409704 --format={{.State.Status}}
	I0919 23:12:41.058901  183226 status.go:371] multinode-409704 host status = "Stopped" (err=<nil>)
	I0919 23:12:41.058925  183226 status.go:384] host is not running, skipping remaining checks
	I0919 23:12:41.058935  183226 status.go:176] multinode-409704 status: &{Name:multinode-409704 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0919 23:12:41.058976  183226 status.go:174] checking status of multinode-409704-m02 ...
	I0919 23:12:41.059241  183226 cli_runner.go:164] Run: docker container inspect multinode-409704-m02 --format={{.State.Status}}
	I0919 23:12:41.080945  183226 status.go:371] multinode-409704-m02 host status = "Stopped" (err=<nil>)
	I0919 23:12:41.080988  183226 status.go:384] host is not running, skipping remaining checks
	I0919 23:12:41.080997  183226 status.go:176] multinode-409704-m02 status: &{Name:multinode-409704-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (30.57s)

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (48.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-409704 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E0919 23:12:58.611447   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-409704 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (48.273004111s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-409704 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (48.88s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (25.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-409704
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-409704-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-409704-m02 --driver=docker  --container-runtime=crio: exit status 14 (72.403463ms)

                                                
                                                
-- stdout --
	* [multinode-409704-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-409704-m02' is duplicated with machine name 'multinode-409704-m02' in profile 'multinode-409704'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-409704-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-409704-m03 --driver=docker  --container-runtime=crio: (22.560414855s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-409704
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-409704: exit status 80 (285.50995ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-409704 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-409704-m03 already exists in multinode-409704-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-409704-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-409704-m03: (2.369336602s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.34s)

                                                
                                    
x
+
TestPreload (112.79s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-943659 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-943659 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (47.863236034s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-943659 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-943659 image pull gcr.io/k8s-minikube/busybox: (2.750358084s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-943659
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-943659: (5.877208744s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-943659 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-943659 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (53.653647401s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-943659 image list
helpers_test.go:175: Cleaning up "test-preload-943659" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-943659
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-943659: (2.420700121s)
--- PASS: TestPreload (112.79s)
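
The flow above is: start a v1.32.0 cluster with --preload=false, pull gcr.io/k8s-minikube/busybox, stop, restart with preloading enabled, and finally run `image list` to confirm the previously pulled image survived the restart. A sketch of that final check, assuming a `minikube` binary on PATH and reusing the profile name from this run:

    // Check whether the busybox image is still present after the preload restart.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	out, err := exec.Command("minikube", "-p", "test-preload-943659", "image", "list").Output()
    	if err != nil {
    		fmt.Println("image list failed:", err)
    		return
    	}
    	if strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
    		fmt.Println("busybox image survived the restart")
    	} else {
    		fmt.Println("busybox image not found after restart")
    	}
    }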

                                                
                                    
x
+
TestScheduledStopUnix (100.36s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-997045 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-997045 --memory=3072 --driver=docker  --container-runtime=crio: (24.575058332s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997045 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-997045 -n scheduled-stop-997045
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997045 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0919 23:16:17.254932   18175 retry.go:31] will retry after 116.891µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.256161   18175 retry.go:31] will retry after 115.737µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.257317   18175 retry.go:31] will retry after 244.368µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.258431   18175 retry.go:31] will retry after 446.249µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.259603   18175 retry.go:31] will retry after 752.263µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.260756   18175 retry.go:31] will retry after 575.26µs: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.261906   18175 retry.go:31] will retry after 1.286403ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.264158   18175 retry.go:31] will retry after 1.368237ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.266385   18175 retry.go:31] will retry after 1.62885ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.268657   18175 retry.go:31] will retry after 2.050132ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.270956   18175 retry.go:31] will retry after 8.614153ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.280202   18175 retry.go:31] will retry after 8.160861ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.289440   18175 retry.go:31] will retry after 19.19147ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.309741   18175 retry.go:31] will retry after 15.737451ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
I0919 23:16:17.326022   18175 retry.go:31] will retry after 25.982338ms: open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/scheduled-stop-997045/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997045 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997045 -n scheduled-stop-997045
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-997045
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-997045 --schedule 15s
E0919 23:16:52.327505   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-997045
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-997045: exit status 7 (68.658107ms)

                                                
                                                
-- stdout --
	scheduled-stop-997045
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997045 -n scheduled-stop-997045
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-997045 -n scheduled-stop-997045: exit status 7 (68.40026ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-997045" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-997045
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-997045: (4.398897974s)
--- PASS: TestScheduledStopUnix (100.36s)
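The retry.go:31 lines at the start of this test are minikube polling for the scheduled-stop pid file with steadily growing delays. A minimal sketch of that retry-with-growing-backoff pattern (function and path names here are illustrative, not minikube's actual helper) might look like:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// retryWithBackoff keeps calling fn until it succeeds or maxElapsed has
	// passed, roughly doubling the wait between attempts, which matches the
	// growing "will retry after ..." delays in the log above.
	func retryWithBackoff(fn func() error, maxElapsed time.Duration) error {
		start := time.Now()
		delay := 200 * time.Microsecond
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Since(start) > maxElapsed {
				return fmt.Errorf("giving up after %s: %w", time.Since(start), err)
			}
			fmt.Printf("will retry after %v: %v\n", delay, err)
			time.Sleep(delay)
			delay *= 2
		}
	}

	func main() {
		// Hypothetical pid-file path; the real test polls the profile directory.
		pidPath := "/tmp/scheduled-stop-demo/pid"
		if err := retryWithBackoff(func() error {
			_, err := os.ReadFile(pidPath)
			return err
		}, 2*time.Second); err != nil {
			fmt.Println("pid file never appeared:", err)
		}
	}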

                                                
                                    
x
+
TestInsufficientStorage (9.95s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-564263 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-564263 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.4217288s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"211f4404-fe6b-4e94-909d-69bb4d1409e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-564263] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"3595bc3a-1fcc-4619-80b8-829f29f4ecc9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21594"}}
	{"specversion":"1.0","id":"118b07da-384a-4da6-806d-0a87afa18383","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"c6dc685f-e3fa-4140-9eea-768a5a144e20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig"}}
	{"specversion":"1.0","id":"33a2c08a-8e10-4863-bc33-77063d9c8cff","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube"}}
	{"specversion":"1.0","id":"5c382a0f-e36e-4c84-b40b-76cea0f1b546","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"f0eacdd6-b6e5-46ed-a604-7c1eeb16b0c7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"64fbb578-d807-41be-bd1f-dd99549717e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"8369b70d-5493-4610-b572-083ffaf48fbd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"2a0d1863-2316-47a2-9a75-5affc0d59aeb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"201e0009-3015-4973-ae6d-2710318f7df1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"2dbe04d5-eace-4e34-9af7-8ebfa84abc0e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-564263\" primary control-plane node in \"insufficient-storage-564263\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"381756d1-bfd0-4e12-8701-3c085d2b0c00","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"6c0cd59f-8be3-4022-8bd4-2926d218661e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"ad4ed34d-3a2f-471c-b463-0649a53176d8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-564263 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-564263 --output=json --layout=cluster: exit status 7 (297.331557ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-564263","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-564263","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:17:40.334537  205723 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-564263" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-564263 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-564263 --output=json --layout=cluster: exit status 7 (295.267885ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-564263","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-564263","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0919 23:17:40.630153  205828 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-564263" does not appear in /home/jenkins/minikube-integration/21594-14668/kubeconfig
	E0919 23:17:40.642281  205828 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/insufficient-storage-564263/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-564263" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-564263
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-564263: (1.933836544s)
--- PASS: TestInsufficientStorage (9.95s)
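The JSON lines captured above are minikube's CloudEvents-style output from --output=json: one object per line with a "type" and a string-keyed "data" payload. A small sketch of consuming that stream (struct trimmed to the fields visible in this log, not minikube's own types) could be:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// event mirrors just the fields visible in the captured output above.
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
		sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024)
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			switch ev.Type {
			case "io.k8s.sigs.minikube.error":
				fmt.Printf("error %s: %s\n", ev.Data["exitcode"], ev.Data["message"])
			case "io.k8s.sigs.minikube.step":
				fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
			}
		}
	}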

                                                
                                    
x
+
TestRunningBinaryUpgrade (47.97s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4254833958 start -p running-upgrade-543477 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4254833958 start -p running-upgrade-543477 --memory=3072 --vm-driver=docker  --container-runtime=crio: (22.81760812s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-543477 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-543477 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.941083396s)
helpers_test.go:175: Cleaning up "running-upgrade-543477" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-543477
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-543477: (2.593087756s)
--- PASS: TestRunningBinaryUpgrade (47.97s)
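This test drives two different binaries against the same profile: first a released v1.32.0 binary downloaded to a temp path, then the freshly built out/minikube-linux-amd64. A rough sketch of that orchestration (paths and the profile name are placeholders, not the test's actual values) might be:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// run shells out and surfaces the combined output, mimicking how the
	// upgrade test drives first a released binary and then the freshly
	// built one against the same profile.
	func run(bin string, args ...string) error {
		out, err := exec.Command(bin, args...).CombinedOutput()
		fmt.Printf("$ %s %v\n%s", bin, args, out)
		return err
	}

	func main() {
		// Paths are placeholders; the real test downloads the old release to
		// a temp file and uses out/minikube-linux-amd64 for the upgrade.
		const profile = "running-upgrade-demo"
		if err := run("/tmp/minikube-v1.32.0", "start", "-p", profile, "--driver=docker", "--container-runtime=crio"); err != nil {
			fmt.Println("old binary failed:", err)
			return
		}
		// Upgrade in place: start again with the new binary on the same profile.
		if err := run("out/minikube-linux-amd64", "start", "-p", profile, "--alsologtostderr", "-v=1"); err != nil {
			fmt.Println("upgrade failed:", err)
		}
		_ = run("out/minikube-linux-amd64", "delete", "-p", profile)
	}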

                                                
                                    
x
+
TestKubernetesUpgrade (333.65s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (23.76638328s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-496007
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-496007: (2.745104834s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-496007 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-496007 status --format={{.Host}}: exit status 7 (93.694061ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m57.16762553s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-496007 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (79.742642ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-496007] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.0 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-496007
	    minikube start -p kubernetes-upgrade-496007 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4960072 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.0, by running:
	    
	    minikube start -p kubernetes-upgrade-496007 --kubernetes-version=v1.34.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-496007 --memory=3072 --kubernetes-version=v1.34.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.58124942s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-496007" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-496007
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-496007: (4.139712263s)
--- PASS: TestKubernetesUpgrade (333.65s)
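The K8S_DOWNGRADE_UNSUPPORTED failure above is minikube refusing to move an existing v1.34.0 cluster back to v1.28.0. A simplified sketch of that kind of version guard (a hand-rolled comparison for illustration, not minikube's actual implementation) could be:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// parse turns "v1.34.0" into comparable integers; good enough for the
	// stable versions used in this test, not a full semver parser.
	func parse(v string) (major, minor, patch int) {
		parts := strings.SplitN(strings.TrimPrefix(v, "v"), ".", 3)
		nums := make([]int, 3)
		for i := 0; i < len(parts) && i < 3; i++ {
			nums[i], _ = strconv.Atoi(parts[i])
		}
		return nums[0], nums[1], nums[2]
	}

	// isDowngrade reports whether requested is older than the version the
	// existing cluster is already running.
	func isDowngrade(existing, requested string) bool {
		em, en, ep := parse(existing)
		rm, rn, rp := parse(requested)
		if rm != em {
			return rm < em
		}
		if rn != en {
			return rn < en
		}
		return rp < ep
	}

	func main() {
		if isDowngrade("v1.34.0", "v1.28.0") {
			fmt.Println("refusing: cannot safely downgrade an existing cluster (see K8S_DOWNGRADE_UNSUPPORTED above)")
		}
	}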

                                                
                                    
x
+
TestMissingContainerUpgrade (82.65s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.747869627 start -p missing-upgrade-322300 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.747869627 start -p missing-upgrade-322300 --memory=3072 --driver=docker  --container-runtime=crio: (24.704382055s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-322300
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-322300: (10.446176655s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-322300
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-322300 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-322300 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (43.109652023s)
helpers_test.go:175: Cleaning up "missing-upgrade-322300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-322300
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-322300: (3.770473791s)
--- PASS: TestMissingContainerUpgrade (82.65s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (83.82184ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-134986] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
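The exit status 14 above is a plain usage error: --no-kubernetes and an explicit --kubernetes-version contradict each other, so minikube bails out before doing any work. A minimal sketch of that kind of mutually-exclusive-flag check (using the standard flag package, not minikube's CLI framework) could be:

	package main

	import (
		"flag"
		"fmt"
		"os"
	)

	func main() {
		noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
		kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to run")
		flag.Parse()

		// Mirrors the MK_USAGE guard seen above: the two flags contradict each other.
		if *noKubernetes && *kubernetesVersion != "" {
			fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
			os.Exit(14) // 14 is the usage-error exit code shown in this report
		}
		fmt.Println("flags OK")
	}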

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.7s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.70s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (37.6s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134986 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134986 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (37.286075615s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-134986 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (37.60s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (62.61s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.390848160 start -p stopped-upgrade-161167 --memory=3072 --vm-driver=docker  --container-runtime=crio
E0919 23:17:58.616306   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.390848160 start -p stopped-upgrade-161167 --memory=3072 --vm-driver=docker  --container-runtime=crio: (41.678542906s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.390848160 -p stopped-upgrade-161167 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.390848160 -p stopped-upgrade-161167 stop: (4.452026438s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-161167 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-161167 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (16.479628115s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (62.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (18.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.047554763s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-134986 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-134986 status -o json: exit status 2 (315.860941ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-134986","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-134986
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-134986: (2.022472376s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.39s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (5.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134986 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (5.578687732s)
--- PASS: TestNoKubernetes/serial/Start (5.58s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-134986 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-134986 "sudo systemctl is-active --quiet service kubelet": exit status 1 (350.396449ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.35s)
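systemctl is-active --quiet exits 0 when the unit is active and non-zero otherwise (3 here, meaning inactive), which is exactly what this check relies on over minikube ssh. A small local sketch of reading that exit code from Go (run directly rather than over ssh) could be:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// The test runs this over `minikube ssh`; locally we can call systemctl
		// directly. An exit code of 3 means the unit is inactive, which is the
		// expected outcome when Kubernetes has not been started.
		cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
		err := cmd.Run()
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("kubelet is active")
		case errors.As(err, &exitErr):
			fmt.Printf("kubelet is not active (exit code %d)\n", exitErr.ExitCode())
		default:
			fmt.Printf("could not run systemctl: %v\n", err)
		}
	}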

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (2.07s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (1.115029094s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.07s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-161167
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-161167: (1.058141948s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.06s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-134986
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-134986: (1.213174243s)
--- PASS: TestNoKubernetes/serial/Stop (1.21s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (7.12s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-134986 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-134986 --driver=docker  --container-runtime=crio: (7.124521183s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.12s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.3s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-134986 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-134986 "sudo systemctl is-active --quiet service kubelet": exit status 1 (296.315318ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.30s)

                                                
                                    
x
+
TestPause/serial/Start (41.54s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-089836 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-089836 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (41.534857709s)
--- PASS: TestPause/serial/Start (41.54s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-089836 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-089836 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.238555165s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.25s)

                                                
                                    
x
+
TestPause/serial/Pause (0.72s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-089836 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.72s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (0.32s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-089836 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-089836 --output=json --layout=cluster: exit status 2 (320.70175ms)

                                                
                                                
-- stdout --
	{"Name":"pause-089836","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-089836","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.32s)
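The status --layout=cluster JSON above reuses HTTP-style codes for component state (200 OK, 405 Stopped, 418 Paused, and 507 InsufficientStorage in the earlier storage test). A sketch of decoding that document into Go structs (fields trimmed to what the output shows; these are not minikube's exported types) could be:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// clusterStatus mirrors the fields visible in the captured output above.
	type clusterStatus struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusCode int    `json:"StatusCode"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}

	func main() {
		// Trimmed copy of the captured output.
		raw := `{"Name":"pause-089836","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-089836","StatusCode":200,"StatusName":"OK"}]}`
		var st clusterStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		fmt.Printf("%s: %d %s\n", st.Name, st.StatusCode, st.StatusName)
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %d %s\n", n.Name, n.StatusCode, n.StatusName)
		}
	}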

                                                
                                    
x
+
TestPause/serial/Unpause (0.69s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-089836 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.69s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-089836 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (2.75s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-089836 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-089836 --alsologtostderr -v=5: (2.749134065s)
--- PASS: TestPause/serial/DeletePaused (2.75s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-781969 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-781969 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (162.785663ms)

                                                
                                                
-- stdout --
	* [false-781969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21594
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0919 23:19:50.089505  248671 out.go:360] Setting OutFile to fd 1 ...
	I0919 23:19:50.089795  248671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:19:50.089804  248671 out.go:374] Setting ErrFile to fd 2...
	I0919 23:19:50.089816  248671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I0919 23:19:50.090046  248671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21594-14668/.minikube/bin
	I0919 23:19:50.090735  248671 out.go:368] Setting JSON to false
	I0919 23:19:50.092038  248671 start.go:130] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":7340,"bootTime":1758316650,"procs":411,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1037-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0919 23:19:50.092173  248671 start.go:140] virtualization: kvm guest
	I0919 23:19:50.094420  248671 out.go:179] * [false-781969] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I0919 23:19:50.095984  248671 out.go:179]   - MINIKUBE_LOCATION=21594
	I0919 23:19:50.095992  248671 notify.go:220] Checking for updates...
	I0919 23:19:50.098734  248671 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0919 23:19:50.099961  248671 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21594-14668/kubeconfig
	I0919 23:19:50.104808  248671 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21594-14668/.minikube
	I0919 23:19:50.106204  248671 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0919 23:19:50.107773  248671 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I0919 23:19:50.109666  248671 config.go:182] Loaded profile config "cert-expiration-463082": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:19:50.109816  248671 config.go:182] Loaded profile config "missing-upgrade-322300": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I0919 23:19:50.109990  248671 config.go:182] Loaded profile config "pause-089836": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
	I0919 23:19:50.110155  248671 driver.go:421] Setting default libvirt URI to qemu:///system
	I0919 23:19:50.135491  248671 docker.go:123] docker version: linux-28.4.0:Docker Engine - Community
	I0919 23:19:50.135578  248671 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0919 23:19:50.195944  248671 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:81 SystemTime:2025-09-19 23:19:50.185719093 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:6.8.0-1037-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652183040 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:05044ec0a9a75232cad458027ca83437aae3f4da Expected:} RuncCommit:{ID:v1.2.5-0-g59923ef Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.28.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.39.4] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner (EXPERIMENTAL) Vendor:Docker Inc. Version:v0.1.39] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0919 23:19:50.196093  248671 docker.go:318] overlay module found
	I0919 23:19:50.198675  248671 out.go:179] * Using the docker driver based on user configuration
	I0919 23:19:50.200046  248671 start.go:304] selected driver: docker
	I0919 23:19:50.200066  248671 start.go:918] validating driver "docker" against <nil>
	I0919 23:19:50.200083  248671 start.go:929] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0919 23:19:50.201868  248671 out.go:203] 
	W0919 23:19:50.203208  248671 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0919 23:19:50.204379  248671 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-781969 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-463082
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-322300
contexts:
- context:
    cluster: cert-expiration-463082
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-463082
  name: cert-expiration-463082
- context:
    cluster: missing-upgrade-322300
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-322300
  name: missing-upgrade-322300
current-context: ""
kind: Config
users:
- name: cert-expiration-463082
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.key
- name: missing-upgrade-322300
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-781969

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-781969"

                                                
                                                
----------------------- debugLogs end: false-781969 [took: 2.928504424s] --------------------------------
helpers_test.go:175: Cleaning up "false-781969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-781969
--- PASS: TestNetworkPlugins/group/false (3.25s)
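
Note: the cleanup above removes the throwaway "false-781969" profile using the suite's own binary. The equivalent manual steps, as an illustrative sketch assuming a minikube binary on PATH rather than out/minikube-linux-amd64:

	minikube profile list
	minikube delete -p false-781969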

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (2.47s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (2.410643787s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-089836
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-089836: exit status 1 (18.962995ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-089836: no such volume

                                                
                                                
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (2.47s)
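
Note: this test confirms that deleting the paused profile left no Docker volume behind; the "no such volume" error is the expected outcome. A manual equivalent of the same checks, as an illustrative sketch (the --filter flags are added here for readability; the test runs the unfiltered commands shown above):

	docker ps -a --filter name=pause-089836
	docker volume inspect pause-089836    # expected: "no such volume"
	docker network ls --filter name=pause-089836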

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (54.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (54.488805533s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (54.49s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (52.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (52.285006008s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (52.29s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-131186 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [49e58efa-9edd-4dad-b8a8-4b36b050021d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [49e58efa-9edd-4dad-b8a8-4b36b050021d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003770725s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-131186 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.34s)
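
Note: DeployApp creates the busybox pod from testdata/busybox.yaml, waits for it to become Ready, then reads the container's open-file limit. The wait in the test is done by a polling helper; "kubectl wait" below is an illustrative stand-in for that helper, not the command the suite runs:

	kubectl --context old-k8s-version-131186 create -f testdata/busybox.yaml
	kubectl --context old-k8s-version-131186 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
	kubectl --context old-k8s-version-131186 exec busybox -- /bin/sh -c "ulimit -n"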

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-131186 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-131186 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.92s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-131186 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-131186 --alsologtostderr -v=3: (16.308271496s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131186 -n old-k8s-version-131186
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131186 -n old-k8s-version-131186: exit status 7 (68.330951ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-131186 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)
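
Note: the status check uses a Go template to print only the Host field, and exit status 7 (host stopped) is tolerated before re-enabling the addon. An illustrative standalone version, assuming minikube on PATH:

	minikube status --format='{{.Host}}' -p old-k8s-version-131186
	# exit status 7 with output "Stopped" is expected after a stop
	minikube addons enable dashboard -p old-k8s-version-131186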

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-131186 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.079420733s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-131186 -n old-k8s-version-131186
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.46s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-042753 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [c75436c5-c887-4c09-b2b8-d2d922b17d09] Pending
helpers_test.go:352: "busybox" [c75436c5-c887-4c09-b2b8-d2d922b17d09] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [c75436c5-c887-4c09-b2b8-d2d922b17d09] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.003420434s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-042753 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-042753 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-042753 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.87s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-042753 --alsologtostderr -v=3
E0919 23:21:52.325276   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-042753 --alsologtostderr -v=3: (18.318012408s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753: exit status 7 (73.731065ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-042753 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (53.75s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-042753 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (53.417308134s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-042753 -n no-preload-042753
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (53.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l4fsr" [210820b7-143b-457a-a435-00679d258623] Running
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l4fsr" [210820b7-143b-457a-a435-00679d258623] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003806076s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l4fsr" [210820b7-143b-457a-a435-00679d258623] Running / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-l4fsr" [210820b7-143b-457a-a435-00679d258623] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003890926s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-131186 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-131186 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (2.89s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-131186 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131186 -n old-k8s-version-131186
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131186 -n old-k8s-version-131186: exit status 2 (301.714851ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-131186 -n old-k8s-version-131186
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-131186 -n old-k8s-version-131186: exit status 2 (301.43389ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-131186 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-131186 -n old-k8s-version-131186
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-131186 -n old-k8s-version-131186
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.89s)
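
Note: the Pause step pauses the cluster, confirms the API server reports "Paused" and the kubelet "Stopped" (exit status 2 from status is tolerated), then unpauses. A manual sketch of the same cycle, assuming minikube on PATH:

	minikube pause -p old-k8s-version-131186
	minikube status --format={{.APIServer}} -p old-k8s-version-131186
	minikube status --format={{.Kubelet}} -p old-k8s-version-131186
	minikube unpause -p old-k8s-version-131186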

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (72.13s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m12.128834277s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (72.13s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (1m10.262693755s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (70.26s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hdlqb" [2699b7f0-b231-4ebc-ac45-9241b863aa0d] Running
E0919 23:22:58.611316   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002965403s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-hdlqb" [2699b7f0-b231-4ebc-ac45-9241b863aa0d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00428908s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-042753 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-042753 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.25s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (26.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (26.493306556s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (26.49s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-756077 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [ae885cc7-f32d-4c22-a77e-ad654185392c] Pending
helpers_test.go:352: "busybox" [ae885cc7-f32d-4c22-a77e-ad654185392c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [ae885cc7-f32d-4c22-a77e-ad654185392c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004239455s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-756077 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-734532 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.85s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.4s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-734532 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-734532 --alsologtostderr -v=3: (2.403292772s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.40s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734532 -n newest-cni-734532
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734532 -n newest-cni-734532: exit status 7 (74.766525ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-734532 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.94s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-734532 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (11.60884908s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-734532 -n newest-cni-734532
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.94s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-756077 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-756077 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.88s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (18.37s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-756077 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-756077 --alsologtostderr -v=3: (18.372276278s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (18.37s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-523696 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [506baaa4-bb63-4524-b6b1-cf817a8f5410] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [506baaa4-bb63-4524-b6b1-cf817a8f5410] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.004247461s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-523696 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.34s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-734532 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-734532 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734532 -n newest-cni-734532
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734532 -n newest-cni-734532: exit status 2 (309.600787ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734532 -n newest-cni-734532
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734532 -n newest-cni-734532: exit status 2 (299.023252ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-734532 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-734532 -n newest-cni-734532
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-734532 -n newest-cni-734532
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.66s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (70.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (1m10.466428717s)
--- PASS: TestNetworkPlugins/group/auto/Start (70.47s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.7s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-523696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-523696 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (2.626981526s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-523696 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.70s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (17.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-523696 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-523696 --alsologtostderr -v=3: (17.186539101s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.19s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077: exit status 7 (80.008442ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-756077 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (48.77s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-756077 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (48.425166555s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-756077 -n embed-certs-756077
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.77s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696: exit status 7 (78.778409ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-523696 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.46s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-523696 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.0: (50.010362681s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.46s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x74ct" [cd4a0c8d-672e-4e4e-883e-c2d1530e9833] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003892028s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-x74ct" [cd4a0c8d-672e-4e4e-883e-c2d1530e9833] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003983971s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-756077 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-756077 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.23s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-781969 "pgrep -a kubelet"
I0919 23:25:14.684131   18175 config.go:182] Loaded profile config "auto-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2s779" [ab603836-6b30-4bc3-bca5-c512a65226ef] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2s779" [ab603836-6b30-4bc3-bca5-c512a65226ef] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004493104s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gnk7z" [2baad272-1f1d-46d1-886a-923f73cde390] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00334272s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (47.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (47.240928693s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (47.24s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-gnk7z" [2baad272-1f1d-46d1-886a-923f73cde390] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003733331s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-523696 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)
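
Note: the three checks above exercise, in order, in-cluster DNS resolution, a pod-local TCP connection, and hairpin traffic back through the pod's own service, all from the netcat deployment. Illustrative manual versions of the same probes (the -i 5 interval flag used by the suite is omitted here):

	kubectl --context auto-781969 exec deployment/netcat -- nslookup kubernetes.default
	kubectl --context auto-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z localhost 8080"
	kubectl --context auto-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -z netcat 8080"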

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-523696 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (57.58s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (57.57630676s)
--- PASS: TestNetworkPlugins/group/calico/Start (57.58s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-523696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-amd64 pause -p default-k8s-diff-port-523696 --alsologtostderr -v=1: (1.045620484s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696: exit status 2 (335.778312ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696: exit status 2 (331.054803ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-523696 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-523696 -n default-k8s-diff-port-523696
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.28s)
E0919 23:27:13.077166   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:27:13.563676   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (50.06s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.061798796s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.06s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (100.67s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E0919 23:25:51.619560   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.625966   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.637440   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.658962   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.700373   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.784237   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:51.945833   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:52.267827   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:52.909896   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:54.192975   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:25:56.755672   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:01.877317   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m40.674772955s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (100.67s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-dj2sq" [bed688d0-ec76-42e0-ba85-e01b02c7f49c] Running
E0919 23:26:12.119662   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004588842s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-781969 "pgrep -a kubelet"
I0919 23:26:15.216965   18175 config.go:182] Loaded profile config "kindnet-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-cln94" [be2c614d-38cd-4ca0-9fc7-b1b0bf859bab] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-cln94" [be2c614d-38cd-4ca0-9fc7-b1b0bf859bab] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.003949306s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-781969 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-9t4v7" [245bb53b-cee0-445a-8d05-65179c2d9cb0] Running
I0919 23:26:25.634046   18175 config.go:182] Loaded profile config "custom-flannel-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003894641s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mwp5p" [65db1060-0054-4ae6-9f1b-57e681985d4c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mwp5p" [65db1060-0054-4ae6-9f1b-57e681985d4c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.004246064s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-781969 "pgrep -a kubelet"
I0919 23:26:31.827808   18175 config.go:182] Loaded profile config "calico-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (8.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
E0919 23:26:32.097572   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rt47w" [31872187-1e90-4c51-8f39-bf32dcabed26] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:26:32.103967   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.115405   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.136871   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.178136   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.260179   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.421840   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.601932   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/old-k8s-version-131186/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:32.744737   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:33.386561   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:34.668434   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-rt47w" [31872187-1e90-4c51-8f39-bf32dcabed26] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005199554s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.27s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (49.85s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
E0919 23:26:52.324542   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/addons-120954/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E0919 23:26:52.595513   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/no-preload-042753/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (49.853517282s)
--- PASS: TestNetworkPlugins/group/flannel/Start (49.85s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (63.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-781969 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m3.588375479s)
--- PASS: TestNetworkPlugins/group/bridge/Start (63.59s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-781969 "pgrep -a kubelet"
I0919 23:27:27.610849   18175 config.go:182] Loaded profile config "enable-default-cni-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-mqcpd" [8e7c8d7b-00a0-4326-8477-c27e2dde62f2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-mqcpd" [8e7c8d7b-00a0-4326-8477-c27e2dde62f2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.004333154s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.30s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-86hhp" [90634d23-a0a5-4960-b877-4df538f04dac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004176534s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-781969 "pgrep -a kubelet"
I0919 23:27:40.895769   18175 config.go:182] Loaded profile config "flannel-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hcq8w" [ef033c3f-4e1d-4fe5-a18f-add6331b5869] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0919 23:27:41.686527   18175 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/functional-393395/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-hcq8w" [ef033c3f-4e1d-4fe5-a18f-add6331b5869] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.006165781s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.19s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-781969 "pgrep -a kubelet"
I0919 23:28:03.089419   18175 config.go:182] Loaded profile config "bridge-781969": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.0
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-781969 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-jjsls" [072a478a-76f6-4951-95b5-02e81e9c60c5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-jjsls" [072a478a-76f6-4951-95b5-02e81e9c60c5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.00443396s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-781969 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-781969 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.12s)

                                                
                                    

Test skip (27/329)

TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.0/kubectl (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.28s)

                                                
                                                
=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-120954 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.28s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:114: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:178: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-815969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-815969
--- SKIP: TestStartStop/group/disable-driver-mounts (0.20s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-781969 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-463082
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-322300
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: pause-089836
contexts:
- context:
    cluster: cert-expiration-463082
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-463082
  name: cert-expiration-463082
- context:
    cluster: missing-upgrade-322300
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-322300
  name: missing-upgrade-322300
- context:
    cluster: pause-089836
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: pause-089836
  name: pause-089836
current-context: pause-089836
kind: Config
users:
- name: cert-expiration-463082
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.key
- name: missing-upgrade-322300
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.key
- name: pause-089836
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/pause-089836/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/pause-089836/client.key
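
Every kubectl probe above failed because this kubeconfig has no kubenet-781969 entry at all. A minimal way to confirm that from a shell, assuming kubectl is on the PATH (the context name is taken from the log):

	kubectl config get-contexts                 # shows only cert-expiration-463082, missing-upgrade-322300 and pause-089836
	kubectl config get-contexts kubenet-781969  # exits non-zero: context kubenet-781969 not found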

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-781969

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-781969"

                                                
                                                
----------------------- debugLogs end: kubenet-781969 [took: 3.051136714s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-781969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-781969
--- SKIP: TestNetworkPlugins/group/kubenet (3.20s)
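
The debugLogs failures above are expected once the group is skipped before any cluster is created. A minimal sketch of collecting the missing diagnostics by hand, assuming the binary built for this job; the ssh probe mirrors the "host: /etc/cni" step and its exact form is an assumption:

	out/minikube-linux-amd64 start -p kubenet-781969
	out/minikube-linux-amd64 -p kubenet-781969 ssh "ls /etc/cni"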

                                                
                                    
TestNetworkPlugins/group/cilium (3.65s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:636: 
----------------------- debugLogs start: cilium-781969 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-781969" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: cert-expiration-463082
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21594-14668/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: missing-upgrade-322300
contexts:
- context:
    cluster: cert-expiration-463082
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:35 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-463082
  name: cert-expiration-463082
- context:
    cluster: missing-upgrade-322300
    extensions:
    - extension:
        last-update: Fri, 19 Sep 2025 23:19:41 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-322300
  name: missing-upgrade-322300
current-context: ""
kind: Config
users:
- name: cert-expiration-463082
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/cert-expiration-463082/client.key
- name: missing-upgrade-322300
  user:
    client-certificate: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.crt
    client-key: /home/jenkins/minikube-integration/21594-14668/.minikube/profiles/missing-upgrade-322300/client.key
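
With no cilium-781969 entry and current-context set to "", the probes above could only have succeeded against a context that is actually present. A minimal sketch, assuming kubectl is on the PATH:

	kubectl config use-context cert-expiration-463082
	kubectl get pods -A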

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-781969

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-781969" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-781969"

                                                
                                                
----------------------- debugLogs end: cilium-781969 [took: 3.476723845s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-781969" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-781969
--- SKIP: TestNetworkPlugins/group/cilium (3.65s)
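
If the cilium variant ever needs to be exercised by hand despite the skip, a minimal sketch using minikube's --cni flag; the flag value and the DaemonSet name are assumptions, not taken from this run:

	out/minikube-linux-amd64 start -p cilium-781969 --cni=cilium
	kubectl --context cilium-781969 -n kube-system get ds cilium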

                                                
                                    